<h1>timfreund&#8217;s changelog (Tim Freund)</h1>
<h2>Work on 2015-07-20</h2>
<p>I spent the last few days learning about the <a href="https://www.opensuse.org/en/">openSUSE</a>
<a href="http://ceph.com">Ceph</a> installation process. I ran into some issues, and I’m not
done yet, so these are just my working notes for now. Once complete, I’ll
write up the process on <a href="http://tim.freunds.net/blog">my regular blog</a>.</p>
<h3>Prerequisite: build a tool to build and destroy small clusters quickly</h3>
<p>I needed a way to quickly provision and destroy
virtual machines that were well suited to run small Ceph clusters. I mostly
run <a href="https://libvirt.org/">libvirt</a> / <a href="http://www.linux-kvm.org/page/Main_Page">kvm</a>
in my home lab, and I didn’t find any solutions tailored to that platform, so
I wrote <a href="https://github.com/timfreund/ceph-libvirt-clusterer">ceph-libvirt-clusterer</a>.</p>
<p><a href="https://github.com/timfreund/ceph-libvirt-clusterer">Ceph-libvirt-clusterer</a>
lets me clone a template virtual machine and attach as many <span class="caps">OSD</span> disks
as I’d like in the process. I’m really happy with the tool
considering that I only have a day’s worth of work in it, and I got to
learn some details of the libvirt <span class="caps">API</span> and python bindings in the process.</p>
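<p>The attach step comes down to generating a libvirt <code>&lt;disk&gt;</code> device fragment and handing it to the domain. A minimal sketch with the python bindings (the helper names here are illustrative, not ceph-libvirt-clusterer&#8217;s actual API):</p>

```python
# Sketch: build the XML for an extra OSD disk and attach it to a cloned
# domain via the libvirt python bindings. Helper names are illustrative,
# not ceph-libvirt-clusterer's actual API.

def osd_disk_xml(image_path, target_dev):
    """Return a libvirt <disk> device fragment for a qcow2-backed virtio disk."""
    return (
        "<disk type='file' device='disk'>"
        "<driver name='qemu' type='qcow2'/>"
        "<source file='%s'/>"
        "<target dev='%s' bus='virtio'/>"
        "</disk>" % (image_path, target_dev)
    )

def attach_osd_disks(domain, image_paths):
    """Attach each image as vdb, vdc, ... (vda is the clone's root disk)."""
    for i, path in enumerate(image_paths):
        dev = 'vd' + chr(ord('b') + i)
        # virDomain.attachDeviceFlags takes device XML plus affect flags
        domain.attachDeviceFlags(osd_disk_xml(path, dev), 0)
```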
<h3>Build a template machine</h3>
<p>I built a template machine with
<a href="https://en.opensuse.org/Portal:Tumbleweed">openSUSE’s tumbleweed</a> and
completed the following preliminary configurations:</p>
<ul>
<li>created ceph user</li>
<li>ceph user has a <span class="caps">SSH</span> key</li>
<li>ceph user’s public key is in the ceph user’s authorized_keys file</li>
<li>ceph user is configured for passwordless sudo</li>
<li>emacs is installed (not strictly necessary :-) )</li>
</ul>
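<p>Those steps amount to a handful of shell commands. A rough sketch of the equivalent prep (the sudoers line and file paths are the conventional ones, not copied from my template):</p>

```python
# Sketch of the template prep as shell commands. The sudoers line and
# file paths are the conventional ones, not copied from the template.
TEMPLATE_PREP = [
    "useradd -m ceph",
    "sudo -u ceph ssh-keygen -t rsa -N '' -f /home/ceph/.ssh/id_rsa",
    "sudo -u ceph sh -c "
    "'cat /home/ceph/.ssh/id_rsa.pub >> /home/ceph/.ssh/authorized_keys'",
    "echo 'ceph ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ceph",
    "zypper --non-interactive install emacs",  # not strictly necessary :-)
]
```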
<h3>Provision a cluster</h3>
<p>I used ceph-libvirt-clusterer to create a four node cluster, and each node had
two <span class="caps">8GB</span> <span class="caps">OSD</span> drives attached.</p>
<p><img src="/media/images/articles/cephlvc-running-machines.png"/></p>
<h3>Install Ceph with ceph-deploy</h3>
<p>Once the machines were built, I followed
the <a href="https://www.suse.com/documentation/ses-1/book_storage_admin/data/ceph_install_ceph-deploy.html"><span class="caps">SUSE</span> Enterprise Storage Documentation</a>.</p>
<p>The ceph packages aren’t yet in the mainline repositories, so I added the
<span class="caps">OBS</span> repository to the admin node:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre><span class="nv">$ </span>sudo zypper ar -f http://download.opensuse.org/repositories/filesystems:/ceph/openSUSE_Tumbleweed/ ceph<br /><span class="nv">$ </span>sudo zypper update<br />Retrieving repository <span class="s1">'ceph'</span> metadata ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------<span class="o">[</span><span class="se">\]</span><br /> <br />New repository or package signing key received:<br /> <br /> Repository: ceph<br /> Key Name: filesystems <span class="caps">OBS</span> Project <filesystems@build.opensuse.org><br /> Key Fingerprint: <span class="caps">B1FB5374</span> 87204722 <span class="caps">05FA6019</span> <span class="caps">98C97FE7</span> 324E6311<br /> Key Created: Mon 12 May 2014 10:34:19 <span class="caps">AM</span> <span class="caps">EDT</span><br /> Key Expires: Wed 20 Jul 2016 10:34:19 <span class="caps">AM</span> <span class="caps">EDT</span><br /> Rpm Name: gpg-pubkey-324e6311-5370dbeb<br /> <br /><br />Do you want to reject the key, trust temporarily, or trust always? <span class="o">[</span>r/t/a/? 
shows all options<span class="o">]</span> <span class="o">(</span>r<span class="o">)</span>: a<br />Retrieving repository <span class="s1">'ceph'</span> metadata .........................................................................................................................................................................<span class="o">[</span><span class="k">done</span><span class="o">]</span><br />Building repository <span class="s1">'ceph'</span> cache ..............................................................................................................................................................................<span class="o">[</span><span class="k">done</span><span class="o">]</span><br />Loading repository data...<br />Reading installed packages...<br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
<p>And ceph packages were visible:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre>tim@linux-7d21:~> zypper search ceph<br />Loading repository data...<br />Reading installed packages...<br /> <br />S | Name | Summary | Type<br />--+--------------------+---------------------------------------------------+-----------<br /> | ceph | User space components of the Ceph file system | package<br /> | ceph | User space components of the Ceph file system | srcpackage<br /> | ceph-common | Ceph Common | package<br /> | ceph-deploy | Admin and deploy tool <span class="k">for </span>Ceph | package<br /> | ceph-deploy | Admin and deploy tool <span class="k">for </span>Ceph | srcpackage<br /> | ceph-devel-compat | Compatibility package <span class="k">for </span>Ceph headers | package<br /> | ceph-fuse | Ceph fuse-based client | package<br /> | ceph-libs-compat | Meta package to include ceph libraries | package<br /> | ceph-radosgw | Rados <span class="caps">REST</span> gateway | package<br /> | ceph-test | Ceph benchmarks and <span class="nb">test </span>tools | package<br /> | libcephfs1 | Ceph distributed file system client library | package<br /> | libcephfs1-devel | Ceph distributed file system headers | package<br /> | python-ceph-compat | Compatibility package <span class="k">for </span>Cephs python libraries | package<br /> | python-cephfs | Python libraries <span class="k">for </span>Ceph distributed file system | package<br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
<h4>First issue: python was missing on the other nodes</h4>
<p>When I installed ceph-deploy on the admin node, python was also
installed. The other nodes were still running with a bare minimum
configuration from the tumbleweed install, so python was missing, and
ceph-deploy’s install step failed.</p>
<p>I installed <a href="http://www.ansible.com/home">Ansible</a> to correct the problem on all
nodes simultaneously, but Ansible requires python on the remote side, too.
That meant I had to install python manually on the remaining three nodes,
just like sysadmins did years ago.</p>
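<p>The manual step is the same one-liner on each node, so it can at least be looped from the admin box. A sketch, assuming plain ssh as the ceph user (the node addresses match the cluster above):</p>

```python
import subprocess

# Sketch: bootstrap python over plain ssh on nodes that can't run
# Ansible yet. Assumes the ceph user's key is authorized everywhere,
# which it is, since the nodes are clones of one template.
NODES = ['192.168.122.122', '192.168.122.123', '192.168.122.124']

def bootstrap_cmd(node):
    return ['ssh', 'ceph@' + node,
            'sudo zypper --non-interactive install python']

def bootstrap_python(nodes, run=subprocess.check_call):
    for node in nodes:
        run(bootstrap_cmd(node))
```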
<h4>Second issue: all nodes need the <span class="caps">OBS</span> repository</h4>
<p>I didn’t add the <span class="caps">OBS</span> repository to the remaining three nodes because I
wanted to see if ceph-deploy would add it automatically. I didn’t expect
that to be the case, but since this version of ceph-deploy came directly from
<span class="caps">SUSE</span>, there was a chance.</p>
<p>With python in place on every node, Ansible worked:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre>ceph@linux-7d21:~/tinyceph> ansible -i ansible-inventory all -a <span class="s2">"sudo zypper ar -f http://download.opensuse.org/repositories/filesystems:/ceph/openSUSE_Tumbleweed/ ceph"</span><br />192.168.122.122 | success | <span class="nv">rc</span><span class="o">=</span>0 >><br />Adding repository <span class="s1">'ceph'</span> <span class="o">[</span>......done<span class="o">]</span><br />Repository <span class="s1">'ceph'</span> successfully added<br />Enabled : Yes<br />Autorefresh : Yes<br /><span class="caps">GPG</span> Check : Yes<br /><span class="caps">URI</span> : http://download.opensuse.org/repositories/filesystems:/ceph/openSUSE_Tumbleweed/<br /> <br /><span class="c"># and three more nodes worth of output...</span><br /> <br />ceph@linux-7d21:~/tinyceph> ansible -i ansible-inventory all -a <span class="s2">"sudo zypper --gpg-auto-import-keys update"</span><br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
<p>Once both of these commands completed, <code>ceph-deploy install</code> worked as expected.</p>
<h4>Third issue: I was using <span class="caps">IP</span> addresses</h4>
<p><code>ceph-deploy new</code> complains when provided with <span class="caps">IP</span> addresses:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre>ceph@linux-7d21:~/tinyceph> ceph-deploy new 192.168.122.121 192.168.122.122 192.168.122.123 192.168.122.124<br />usage: ceph-deploy new <span class="o">[</span>-h<span class="o">]</span> <span class="o">[</span>--no-ssh-copykey<span class="o">]</span> <span class="o">[</span>--fsid <span class="caps">FSID</span><span class="o">]</span><br /> <span class="o">[</span>--cluster-network CLUSTER_NETWORK<span class="o">]</span><br /> <span class="o">[</span>--public-network PUBLIC_NETWORK<span class="o">]</span><br /> <span class="caps">MON</span> <span class="o">[</span><span class="caps">MON</span> ...<span class="o">]</span><br />ceph-deploy new: error: 192.168.122.121 must be a hostname not an <span class="caps">IP</span><br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
<p>In the future, it’d be pretty cool if ceph-libvirt-clusterer supported
updating <span class="caps">DNS</span> records so I didn’t need to resort to the host file
ansible playbook that I used today:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre><span class="nn">---</span><br /><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">all</span><br /> <span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span><br /> <span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">add tinyceph-00</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hosts line='192.168.122.121 tinyceph-00'</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">add tinyceph-01</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hosts line='192.168.122.122 tinyceph-01'</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">add tinyceph-02</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hosts line='192.168.122.123 tinyceph-02'</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">add tinyceph-03</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hosts line='192.168.122.124 tinyceph-03'</span><br /><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">192.168.122.121</span><br /> 
<span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span><br /> <span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">update hostname</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hostname line='tinyceph-00' state=present regexp=linux-7d21</span><br /><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">192.168.122.122</span><br /> <span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span><br /> <span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">update hostname</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hostname line='tinyceph-01' state=present regexp=linux-7d21</span><br /><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">192.168.122.123</span><br /> <span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span><br /> <span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">update hostname</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hostname line='tinyceph-02' state=present 
regexp=linux-7d21</span><br /><span class="p-Indicator">-</span> <span class="l-Scalar-Plain">hosts</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">192.168.122.124</span><br /> <span class="l-Scalar-Plain">sudo</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">yes</span><br /> <span class="l-Scalar-Plain">tasks</span><span class="p-Indicator">:</span><br /> <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">update hostname</span><br /> <span class="l-Scalar-Plain">lineinfile</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">dest=/etc/hostname line='tinyceph-03' state=present regexp=linux-7d21</span><br /></pre></div><br /><figcaption><span class="caps">YAML</span></figcaption></figure></div>
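<p>Until then, the mapping is simple enough to generate instead of hand-writing one <code>lineinfile</code> task per node. A sketch that emits the same /etc/hosts lines:</p>

```python
# Sketch: generate the /etc/hosts lines for the cluster from one mapping
# instead of hand-writing a playbook task per node.
CLUSTER = {
    'tinyceph-00': '192.168.122.121',
    'tinyceph-01': '192.168.122.122',
    'tinyceph-02': '192.168.122.123',
    'tinyceph-03': '192.168.122.124',
}

def hosts_lines(cluster):
    return ['%s %s' % (ip, name) for name, ip in sorted(cluster.items())]
```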
<h4>Fourth issue: tumbleweed uses systemd, but ceph-deploy doesn’t expect that</h4>
<div class="codebox"><figure class="code"><div class="highlight"><pre><span class="o">[</span>ceph_deploy.mon<span class="o">][</span><span class="caps">INFO</span> <span class="o">]</span> distro info: openSUSE 20150714 x86_64<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> determining <span class="k">if </span>provided host has same hostname in remote<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> get remote short hostname<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> deploying mon to tinyceph-03<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> get remote short hostname<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> remote hostname: tinyceph-03<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> write cluster configuration to /etc/ceph/<span class="o">{</span>cluster<span class="o">}</span>.conf<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> create the mon path <span class="k">if </span>it does not exist<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> checking <span class="k">for done </span>path: /var/lib/ceph/mon/ceph-tinyceph-03/done<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> create a <span class="k">done </span>file to avoid re-doing the mon deployment<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">DEBUG</span> <span class="o">]</span> create the init path <span 
class="k">if </span>it does not exist<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">INFO</span> <span class="o">]</span> Running <span class="nb">command</span>: sudo /etc/init.d/ceph -c /etc/ceph/ceph.conf start mon.tinyceph-03<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> Traceback <span class="o">(</span>most recent call last<span class="o">)</span>:<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"/usr/lib/python2.7/site-packages/remoto/process.py"</span>, line 94, in run<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> reporting<span class="o">(</span>conn, result, timeout<span class="o">)</span><br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"/usr/lib/python2.7/site-packages/remoto/log.py"</span>, line 13, in reporting<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> <span class="nv">received</span> <span class="o">=</span> result.receive<span class="o">(</span>timeout<span class="o">)</span><br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"/usr/lib/python2.7/site-packages/execnet/gateway_base.py"</span>, line 701, in receive<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> raise self._getremoteerror<span class="o">()</span> or EOFError<span class="o">()</span><br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> RemoteError: Traceback <span class="o">(</span>most recent call 
last<span class="o">)</span>:<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"<string>"</span>, line 1033, in executetask<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"<remote exec>"</span>, line 12, in _remote_run<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"/usr/lib64/python2.7/subprocess.py"</span>, line 710, in __init__<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> errread, errwrite<span class="o">)</span><br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> File <span class="s2">"/usr/lib64/python2.7/subprocess.py"</span>, line 1335, in _execute_child<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> raise child_exception<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> OSError: <span class="o">[</span>Errno 2<span class="o">]</span> No such file or directory<br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span><br /><span class="o">[</span>tinyceph-03<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span><br /><span class="o">[</span>ceph_deploy.mon<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> Failed to execute <span class="nb">command</span>: /etc/init.d/ceph -c /etc/ceph/ceph.conf start mon.tinyceph-03<br /><span class="o">[</span>ceph_deploy<span class="o">][</span><span class="caps">ERROR</span> <span class="o">]</span> GenericError: Failed to create 4 
monitors<br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
<p>Sure enough, a little manual inspection revealed that there is no file at <code>/etc/init.d/ceph</code>; the packages ship systemd units instead:</p>
<div class="codebox"><figure class="code"><div class="highlight"><pre>ceph@tinyceph-00:~/tinyceph> ls -la /etc/init.d/ceph<br />ls: cannot access /etc/init.d/ceph: No such file or directory<br />ceph@tinyceph-00:~/tinyceph> sudo service ceph status<br />* ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once<br /> Loaded: loaded <span class="o">(</span>/usr/lib/systemd/system/ceph.target; disabled; vendor preset: disabled<span class="o">)</span><br /> Active: inactive <span class="o">(</span>dead<span class="o">)</span><br /> <br />Jul 19 23:50:35 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.<br />Jul 19 23:50:35 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Starting ceph target allowing to start/stop all ceph*@.service instances at once.<br />Jul 19 23:50:47 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Stopped target ceph target allowing to start/stop all ceph*@.service instances at once.<br />Jul 19 23:50:47 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Stopping ceph target allowing to start/stop all ceph*@.service instances at once.<br />ceph@tinyceph-00:~/tinyceph> sudo service ceph start<br />ceph@tinyceph-00:~/tinyceph> sudo service ceph status<br />* ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once<br /> Loaded: loaded <span class="o">(</span>/usr/lib/systemd/system/ceph.target; disabled; vendor preset: disabled<span class="o">)</span><br /> Active: active since Mon 2015-07-20 00:24:01 <span class="caps">EDT</span>; 4s ago<br /> <br />Jul 20 00:24:01 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.<br />Jul 20 00:24:01 tinyceph-00 systemd<span class="o">[</span>1<span class="o">]</span>: Starting ceph 
target allowing to start/stop all ceph*@.service instances at once.<br /></pre></div><br /><figcaption>Bash</figcaption></figure></div>
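<p>The usual heuristic for this is to check for systemd&#8217;s runtime directory before falling back to the init script. A sketch of that detection pattern (this is not what ceph-deploy itself does):</p>

```python
import os

# Sketch: choose the mon start command based on the init system.
# /run/systemd/system exists only when systemd is PID 1; this is the
# common detection heuristic, not ceph-deploy's actual logic.
def start_mon_command(hostname, root='/'):
    if os.path.isdir(os.path.join(root, 'run/systemd/system')):
        return ['systemctl', 'start', 'ceph-mon@%s' % hostname]
    return ['/etc/init.d/ceph', 'start', 'mon.%s' % hostname]
```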
<p>I learned that this is a <a href="https://bugzilla.opensuse.org/show_bug.cgi?id=937120">known bug</a>,
and I’ll try all of this again with an older version of openSUSE.</p>
<p>… and that’s where I’m calling it a night. I’ll be back at it this week.</p>
<h2>Work on 2014-08-02</h2>
<p>Spent 2 hours getting this <a href="https://bitbucket.org/conservancy/kallithea/pull-request/25/add-user-interface-and-database-components/diff">pull request</a> ready.</p>
<h2>Work on 2014-07-30</h2>
<p>Last time I worked on Kallithea’s <span class="caps">CI</span>, I got some errors. On a fresh
Ubuntu 14.04 <span class="caps">VM</span> without docker, I get the following test results:</p>
<h2>In a Virtual Machine</h2>
<p><strong>sqlite: 0 errors, 2 skipped</strong></p>
<p><strong>mysql: 0 errors, 2 skipped</strong></p>
<p><strong>postgresql: 1 error, 2 skipped</strong></p>
<p>details:</p>
<pre><code>======================================================================
ERROR: test_index_with_anonymous_access_disabled (kallithea.tests.functional.test_home.TestHomeController)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/packer/src/kallithea-pg/kallithea/tests/functional/test_home.py", line 43, in test_index_with_anonymous_access_disabled
status=302)
File "/home/packer/src/kallithea/.venv/local/lib/python2.7/site-packages/WebTest-1.4.3-py2.7.egg/webtest/app.py", line 759, in get
expect_errors=expect_errors)
File "/home/packer/src/kallithea/.venv/local/lib/python2.7/site-packages/WebTest-1.4.3-py2.7.egg/webtest/app.py", line 1121, in do_request
self._check_status(status, res)
File "/home/packer/src/kallithea/.venv/local/lib/python2.7/site-packages/WebTest-1.4.3-py2.7.egg/webtest/app.py", line 1160, in _check_status
"Bad response: %s (not %s)", res_status, status)
AppError: Bad response: 200 OK (not 302)
----------------------------------------------------------------------
Ran 1482 tests in 311.450s
FAILED (SKIP=2, errors=1)
</code></pre>
<h2>In a Docker Container</h2>
<p><strong>sqlite</strong></p>
<p>I’m betting that these messages are a canary that will help figure out the sqlite failures:</p>
<pre><code>kallithea_1 | not trusting file /code/.hg/hgrc from untrusted user 1000, group 1000
kallithea_1 | not trusting file /tmp/rc_test_lPm4Rl/vcs_test_hg/.hg/hgrc from untrusted user 502, group root
kallithea_1 | not trusting file /tmp/rc_test_lPm4Rl/vcs_test_hg/.hg/hgrc from untrusted user 502, group root
kallithea_1 | not trusting file /tmp/rc_test_lPm4Rl/vcs_test_hg/.hg/hgrc from untrusted user 502, group root
</code></pre>
<p>Here’s the full list of error details:</p>
<pre><code>kallithea_1 | ======================================================================
kallithea_1 | ERROR: test_index_with_anonymous_access_disabled (kallithea.tests.functional.test_home.TestHomeController)
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Traceback (most recent call last):
kallithea_1 | File "/code/kallithea/tests/functional/test_home.py", line 43, in test_index_with_anonymous_access_disabled
kallithea_1 | status=302)
kallithea_1 | File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 759, in get
kallithea_1 | expect_errors=expect_errors)
kallithea_1 | File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 1121, in do_request
kallithea_1 | self._check_status(status, res)
kallithea_1 | File "/usr/local/lib/python2.7/dist-packages/webtest/app.py", line 1160, in _check_status
kallithea_1 | "Bad response: %s (not %s)", res_status, status)
kallithea_1 | AppError: Bad response: 200 OK (not 302)
kallithea_1 |
kallithea_1 | ======================================================================
kallithea_1 | FAIL: test_create_non_ascii (kallithea.tests.functional.test_admin_repos.TestAdminReposControllerGIT)
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Traceback (most recent call last):
kallithea_1 | File "/code/kallithea/tests/functional/test_admin_repos.py", line 103, in test_create_non_ascii
kallithea_1 | self.assertEqual(response.json, {u'result': True})
kallithea_1 | AssertionError: {u'result': False} != {u'result': True}
kallithea_1 | - {u'result': False}
kallithea_1 | ? ^^^^
kallithea_1 |
kallithea_1 | + {u'result': True}
kallithea_1 | ? ^^^
kallithea_1 |
kallithea_1 | """Fail immediately, with the given message."""
kallithea_1 | >> raise self.failureException("{u'result': False} != {u'result': True}\n- {u'result': False}\n? ^^^^\n\n+ {u'result': True}\n? ^^^\n")
kallithea_1 |
kallithea_1 |
kallithea_1 | ======================================================================
kallithea_1 | FAIL: test_delete_non_ascii (kallithea.tests.functional.test_admin_repos.TestAdminReposControllerGIT)
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Traceback (most recent call last):
kallithea_1 | File "/code/kallithea/tests/functional/test_admin_repos.py", line 420, in test_delete_non_ascii
kallithea_1 | self.assertEqual(response.json, {u'result': True})
kallithea_1 | AssertionError: {u'result': False} != {u'result': True}
kallithea_1 | - {u'result': False}
kallithea_1 | ? ^^^^
kallithea_1 |
kallithea_1 | + {u'result': True}
kallithea_1 | ? ^^^
kallithea_1 |
kallithea_1 | """Fail immediately, with the given message."""
kallithea_1 | >> raise self.failureException("{u'result': False} != {u'result': True}\n- {u'result': False}\n? ^^^^\n\n+ {u'result': True}\n? ^^^\n")
kallithea_1 |
kallithea_1 |
kallithea_1 | ======================================================================
kallithea_1 | FAIL: test_create_non_ascii (kallithea.tests.functional.test_admin_repos.TestAdminReposControllerHG)
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Traceback (most recent call last):
kallithea_1 | File "/code/kallithea/tests/functional/test_admin_repos.py", line 103, in test_create_non_ascii
kallithea_1 | self.assertEqual(response.json, {u'result': True})
kallithea_1 | AssertionError: {u'result': False} != {u'result': True}
kallithea_1 | - {u'result': False}
kallithea_1 | ? ^^^^
kallithea_1 |
kallithea_1 | + {u'result': True}
kallithea_1 | ? ^^^
kallithea_1 |
kallithea_1 | """Fail immediately, with the given message."""
kallithea_1 | >> raise self.failureException("{u'result': False} != {u'result': True}\n- {u'result': False}\n? ^^^^\n\n+ {u'result': True}\n? ^^^\n")
kallithea_1 |
kallithea_1 |
kallithea_1 | ======================================================================
kallithea_1 | FAIL: test_delete_non_ascii (kallithea.tests.functional.test_admin_repos.TestAdminReposControllerHG)
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Traceback (most recent call last):
kallithea_1 | File "/code/kallithea/tests/functional/test_admin_repos.py", line 420, in test_delete_non_ascii
kallithea_1 | self.assertEqual(response.json, {u'result': True})
kallithea_1 | AssertionError: {u'result': False} != {u'result': True}
kallithea_1 | - {u'result': False}
kallithea_1 | ? ^^^^
kallithea_1 |
kallithea_1 | + {u'result': True}
kallithea_1 | ? ^^^
kallithea_1 |
kallithea_1 | """Fail immediately, with the given message."""
kallithea_1 | >> raise self.failureException("{u'result': False} != {u'result': True}\n- {u'result': False}\n? ^^^^\n\n+ {u'result': True}\n? ^^^\n")
kallithea_1 |
kallithea_1 |
kallithea_1 | ----------------------------------------------------------------------
kallithea_1 | Ran 1479 tests in 281.475s
kallithea_1 |
kallithea_1 | FAILED (SKIP=2, errors=1, failures=4)
</code></pre>
<p>Boo… file permissions were not the problem.</p>
<p>I just removed the following from my fig config:</p>
<pre><code>volumes:
- .:/code/
</code></pre>
<p>And added the following to my Dockerfile:</p>
<pre><code>RUN useradd -d /home/kallithea -m -s /bin/bash -u 2000 kallithea
RUN chown -R kallithea /code
USER kallithea
</code></pre>
<p>… to ensure that there was no weirdness induced by running as root
against files owned by a different <span class="caps">UID</span> than the
test process. Even so, I got the same four errors, so something is off
when running in this container.</p>
<p>All the failing tests include the string “non_ascii” in their names.</p>
<p>Let’s see what <code>locale</code> tells us on the virtual machine:</p>
<pre><code>packer@example:~/src/kallithea$ locale
LANG=en_US.utf8
LANGUAGE=en_US:
LC_CTYPE="en_US.utf8"
LC_NUMERIC="en_US.utf8"
LC_TIME="en_US.utf8"
LC_COLLATE="en_US.utf8"
LC_MONETARY="en_US.utf8"
LC_MESSAGES="en_US.utf8"
LC_PAPER="en_US.utf8"
LC_NAME="en_US.utf8"
LC_ADDRESS="en_US.utf8"
LC_TELEPHONE="en_US.utf8"
LC_MEASUREMENT="en_US.utf8"
LC_IDENTIFICATION="en_US.utf8"
LC_ALL=
</code></pre>
<p>… and in the Docker container:</p>
<pre><code>kallithea@56a8a9afa48d:/code$ locale
LANG=
LANGUAGE=
LC_CTYPE="POSIX"
LC_NUMERIC="POSIX"
LC_TIME="POSIX"
LC_COLLATE="POSIX"
LC_MONETARY="POSIX"
LC_MESSAGES="POSIX"
LC_PAPER="POSIX"
LC_NAME="POSIX"
LC_ADDRESS="POSIX"
LC_TELEPHONE="POSIX"
LC_MEASUREMENT="POSIX"
LC_IDENTIFICATION="POSIX"
LC_ALL=
</code></pre>
<p>The Docker container doesn’t include <code>en_US.utf8</code>, but it does include
<code>C.UTF-8</code>… let’s give that a spin.</p>
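<p>For the record, switching the container over should only take an
environment change; I’d expect a couple of Dockerfile lines like these to do
it (my assumption about the mechanism, not a verified excerpt):</p>
<pre><code>ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
</code></pre>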
<pre><code>kallithea_1 | Ran 1479 tests in 286.144s
kallithea_1 |
kallithea_1 | OK (SKIP=2)
kallithea_kallithea_1 exited with code 0
</code></pre>
<p><span class="caps">WOO</span>!</p> Work on 2014-07-12Tim Freund2014-07-12T23:56:45Z2014-07-12T23:56:45Z/changelog/2014-07-12.html
<p>The unit tests now run inside of containers managed by fig. I wrote
two scripts to facilitate the execution:</p>
<ul>
<li>integration-configs/fig-config-glue.py: reads environment variables set by fig to create a sqlalchemy <span class="caps">URL</span> and update an ini file with it.</li>
<li>integration-configs/execute_tests.sh: runs the above script, updating test.ini, then sleeps for 10 seconds while the database starts, then runs nosetests</li>
</ul>
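<p>The guts of that glue script are small; roughly this (a sketch with assumed
variable, section, and option names, not the actual fig-config-glue.py):</p>
<pre><code>import os

try:
    from configparser import RawConfigParser   # Python 3
except ImportError:
    from ConfigParser import RawConfigParser   # Python 2


def sqlalchemy_url(environ):
    """Build a SQLAlchemy URL from the variables fig sets on linked
    containers.  Credentials and database name are assumptions."""
    return "postgresql://kallithea:kallithea@%s:%s/kallithea" % (
        environ["DB_1_PORT_5432_TCP_ADDR"],
        environ["DB_1_PORT_5432_TCP_PORT"],
    )


def update_ini(ini_path, environ=os.environ):
    """Write the generated URL into the ini file's app:main section."""
    parser = RawConfigParser()
    parser.read(ini_path)
    parser.set("app:main", "sqlalchemy.url", sqlalchemy_url(environ))
    with open(ini_path, "w") as handle:
        parser.write(handle)
</code></pre>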
<p>I had been switching between the various databases just by using a
different fig configuration file, but that is insufficient. Fig must
be invoked with both of the following arguments:</p>
<pre><code>fig -f fig-${DB_TYPE} -p kallithea-${DB_TYPE}
</code></pre>
<p>If a project name isn’t specified, fig won’t differentiate between the
various database containers (all named “db” in the configs).</p>
<p>Of course, different numbers of tests fail in each configuration
(including some sqlite tests that don’t fail when I run them directly on
my machine without Docker in the middle), so there’s still some
testing and adjusting to complete.</p>
<p>Another annoyance: when the tests complete, fig shuts down both containers,
but fig’s exit code is always zero even if one of the containers exited
with a non-zero return code. Going to ask the team if that’s by design
or if they’re open to changing it. As is, I’ll need to either parse
the nose output or parse the output of <code>fig ps</code> to give an appropriate
exit code to the build server.</p> Work on 2014-07-11Tim Freund2014-07-11T23:08:49Z2014-07-11T23:08:49Z/changelog/2014-07-11.html
<p>New info regarding Docker, <span class="caps">ENTRYPOINT</span>, and <span class="caps">CMD</span>.</p>
<p><a href="http://docs.docker.com/reference/builder/#cmd">Read the <span class="caps">CMD</span> docs carefully</a></p>
<p>I thought I could use <span class="caps">ENTRYPOINT</span> to load up environment variables and
do some voodoo with the ini file to get the database connection
configured correctly:</p>
<pre><code>ENTRYPOINT ["/bin/bash", "--rcfile", "/code/.figrc", "-c"]
</code></pre>
<p>So then anything I ran would run after the <code>/code/.figrc</code> loaded.
Except that doesn’t happen. Read the <span class="caps">CMD</span> docs again, and you’ll see
that a command given in shell form is executed via <code>/bin/sh -c</code>,
so <code>/code/.figrc</code> never loads.</p>
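<p>For future reference, the two <span class="caps">CMD</span> forms behave like this
(illustrative lines, not from my Dockerfile):</p>
<pre><code># exec form: runs exactly this argv -- no shell, no rc files
CMD ["paster", "serve", "test.ini"]

# shell form: Docker wraps it as /bin/sh -c "paster serve test.ini"
CMD paster serve test.ini
</code></pre>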
<p>/me sighs</p>
<p>/me comes back after 15 minutes</p>
<p>These are not general purpose images. They’re specifically to run test
suites. Why do I care about them working for every case? I can just
write a wrapper script and call it a day.</p> Work on 2014-07-10Tim Freund2014-07-10T00:08:49Z2014-07-10T00:08:49Z/changelog/2014-07-10.html
<p>I spent about an hour tonight working on flexible configurations for
testing <a href="https://kallithea-scm.org/">Kallithea</a> against various
databases using <a href="http://orchardup.github.io/fig/">Fig</a> and
<a href="http://www.docker.com/">Docker</a>.</p>
<p>Fig handles some of the dirty work of linking together Docker
containers. Linked containers get environment variables set to define
endpoints of the other containers. They use
<a href="http://orchardup.github.io/fig/django.html">Django as an example</a>
and things look pretty easy: since the configuration file is written
in python, we can just call <code>os.environ.get('DB_1_PORT_5432_TCP_PORT')</code>.</p>
<p>No such luck with Pylons and Pyramid, though: there we use an ini file
for configuration, and I ran into a few bumps.</p>
<p>The <code>paster serve</code> command provides an avenue for command line
configuration: <code>var=value</code> can be repeated to pass in configuration
options on the command line, and the named vars can be referenced in
the ini file with <code>%(var)s</code>. That’s good.</p>
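<p>As a made-up illustration, with an ini line like</p>
<pre><code>sqlalchemy.url = postgresql://kallithea:kallithea@%(dbhost)s/kallithea
</code></pre>
<p>running <code>paster serve test.ini dbhost=10.0.0.5</code> fills the
address in.</p>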
<p>Fig doesn’t seem to support environment variables in its <span class="caps">YAML</span>
configuration files, so <code>paster serve test.ini
dbhost=$DB_1_PORT_5432_TCP_ADDR</code> results in a literal string
“DB_1_PORT_5432_TCP_ADDR” in the configuration. That’s bad,
but it can be fixed with a wrapper script.</p>
<p>Kallithea’s setup-db command doesn’t support the same <code>var=value</code>
setting on the command line that <code>paster serve</code> supports. That’s bad,
but the wrapper script can rewrite the configuration files rather than
pass in values via arguments. That’s where I’m leaving off for tonight.</p>
<p>One other dangling question: I tried putting my Dockerfile and
Fig yaml configurations in a subdirectory to keep the project root
uncluttered, but it didn’t look like Docker liked using <code>..</code> in place
of <code>.</code>. I need to confirm that: there’s a chance that something else
was out of line that I didn’t notice.</p>
<p><span class="caps">EDIT</span>: turns out that
<a href="https://github.com/dotcloud/docker/issues/2745">relative paths really aren’t allowed</a>.
That didn’t take long to find.</p> Work on 2014-07-04Tim Freund2014-07-04T00:44:53Z2014-07-04T00:44:53Z/changelog/2014-07-04.html
<p>Investigating how to build authentic testing hosts that look and act
just like the ones <a href="http://ci.openstack.org/nodepool/">Nodepool</a> builds.</p>
<p>When Nodepool is updating base images, it copies all files found at
<a href="https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts/">openstack-infra/config/modules/openstack_project/files/nodepool/scripts</a>
to <code>/opt/nodepool-scripts</code>.</p>
<p>Each base image type,
<a href="https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/templates/nodepool/nodepool.yaml.erb">defined in Nodepool’s <span class="caps">YAML</span> configuration file</a>,
includes a setup attribute that matches one of the scripts. The
matching script is executed in an environment that includes any
NODEPOOL_ variables present in the Nodepool daemon’s environment. From
all that I’ve seen, this typically only includes NODEPOOL_SSH_KEY.
(See <a href="https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/manifests/jenkins_dev.pp">jenkins_dev.pp</a>)</p>
<p>So to build replica nodes for personal use, I should just need to copy
the scripts to /opt/nodepool-scripts and execute the right one in my
packer provisioning configuration.</p> Work on 2014-06-18Tim Freund2014-06-18T00:05:20Z2014-06-18T00:05:20Z/changelog/2014-06-18.html
<p>Building a system at work to create disposable machines for developer
and <span class="caps">QA</span> use that will match production. The process looks like this:</p>
<ul>
<li>Query Foreman for Puppet classes used by a hostgroup</li>
<li>Create a Puppetfile with some knowledge about where we keep modules and the names that Foreman provides</li>
<li>Run r10k to sync the modules</li>
</ul>
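<p>The generated Puppetfile is nothing fancy; each module becomes an entry
along these lines (module name and git <span class="caps">URL</span> invented
for illustration):</p>
<pre><code>mod "apache",
  :git =&gt; "git@git.example.com:puppet-modules/apache.git",
  :ref =&gt; "production"
</code></pre>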
<p>I did a bunch of work to extract some metadata from the Modulefiles,
but that was completely unnecessary and thus stupid. Never hurts to
take a step back and draw a process before jumping down a rabbit hole.</p>
<p>I have enough information that I could skip creating the Puppetfile
altogether and just clone the repo myself, but I think I like the idea
of using r10k to install modules from the Puppetfile for two reasons:</p>
<ol>
<li>It ensures that stale modules are purged gracefully. (I would have just scorched the Earth and deleted the entire modules directory)</li>
<li>It produces an artifact that can be used by other standard tools if necessary.</li>
</ol> Work on 2014-06-11Tim Freund2014-06-11T06:45:41Z2014-06-11T06:45:41Z/changelog/2014-06-11.html
<p>Investigating the
<a href="http://ci.openstack.org/devstack-gate.html">Devstack Gate</a>
documentation. Seems to be more out of date than I originally
expected, or I don’t understand where things run. It mentions a
matrix job called devstack-update-vm-image, but reviewing the
<a href="https://jenkins.openstack.org/view/All/">job list</a> shows that
jobs matching that description haven’t run in 9-10 months:</p>
<ul>
<li>devstack-update-vm-image-hpcloud-az1</li>
<li>devstack-update-vm-image-hpcloud-az2</li>
<li>devstack-update-vm-image-hpcloud-az3</li>
<li>devstack-update-vm-image-rackspace</li>
<li>devstack-update-vm-image-rackspace-dfw</li>
<li>devstack-update-vm-image-rackspace-ord</li>
</ul>
<p>Looking further, all devstack-update-vm-image* and
devstack-check-vms-* jobs are disabled.</p>
<p>I suspect that all of this work migrated to the
<a href="http://ci.openstack.org/nodepool.html">Nodepool</a> project.</p>
<p><a href="http://nodepool.openstack.org/">nodepool.openstack.org</a> contains a
directory listing with logs that have recent time stamps. Promising.</p>
<p>The <a href="http://nodepool.openstack.org/image.log.2014-06-04">log output</a>
(warning, <span class="caps">20MB</span>) seems to confirm my suspicion.</p>
<p>Found the <a href="https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/templates/nodepool">provider configurations minus credentials</a>
and the <a href="https://git.openstack.org/cgit/openstack-infra/config/tree/modules/openstack_project/files/nodepool/scripts">bootstrap scripts</a>.</p>
<p>I sent an email to the openstack-infra mailing list for clarification.</p> Work on 2014-06-01Tim Freund2014-06-01T11:17:13Z2014-06-01T11:17:13Z/changelog/2014-06-01.html
<p><strong> Fix broken link in openstack-infra/config/doc/source/devstack-gate.rst </strong></p>
<p>The “At a Glance” section links to devstack_launch_slave.pp, which was deleted.</p>
<ul>
<li>Bugs go here for openstack-infra/config: <a href="https://bugs.launchpad.net/openstack-ci">https://bugs.launchpad.net/openstack-ci</a></li>
<li>Bug at: <a href="https://bugs.launchpad.net/openstack-ci/+bug/1325379">https://bugs.launchpad.net/openstack-ci/+bug/1325379</a></li>
<li>Review at: <a href="https://review.openstack.org/#/c/97110/">https://review.openstack.org/#/c/97110/</a></li>
</ul> Work on 2014-05-31Tim Freund2014-05-31T02:17:26Z2014-05-31T02:17:26Z/changelog/2014-05-31.html
<p>This is a new experiment borrowed from
<a href="http://dachary.org/">Loic Dachary</a> where I plan to keep notes on any
open source or otherwise public work that I do. I may write notes
that solve a problem for others, so it’s worth doing this out in the open.</p>
<p>With that said, writing that I contemplate and edit more rigorously will still
appear on <a href="/blog">my blog</a>.</p>
<h3>Topics for Upstream University mentoring meeting:</h3>
<p><strong>https://review.openstack.org/#/c/95325/ is merged!</strong></p>
<ul>
<li>Always check for inline comments</li>
</ul>
<p><strong>py27 test failure resolution</strong></p>
<ul>
<li>py27 test failure encountered in Jenkins is a known bug, but several
hours were lost trying to recreate it.</li>
<li>https://bugs.launchpad.net/designate/+bug/1321873</li>
<li>A Google search is insufficient: check the project bug list, too!</li>
<li>Kiall and Vinod are researching the issue</li>
<li>Bonus: learned about Tox and how to run individual tests</li>
</ul>
<p><strong> Attended the <a href="http://eavesdrop.openstack.org/meetings/designate/2014/designate.2014-05-28-17.05.log.html">team meeting</a></strong></p>
<ul>
<li>Learned that all projects share the same openstack-dev mailing list, and subscribed</li>
</ul>
<p><strong> Claim <a href="https://bugs.launchpad.net/designate/+bug/1282627">bug 1282627</a></strong></p>
<ul>
<li>Claimed https://bugs.launchpad.net/designate/+bug/1282627 as agreed last week.</li>
<li>designate-2013.2.tar.gz is the only tarball on Launchpad with a valid <span class="caps">AUTHORS</span> file</li>
<li>Wrote a script to run <code>python setup.py sdist</code> on all tags and check for <span class="caps">AUTHORS</span> content<ul>
<li>All locally generated tarballs have correct <span class="caps">AUTHORS</span> information</li>
</ul>
</li>
<li>Will next attempt to run the jobs in a simulated Jenkins environment (see next topic)</li>
</ul>
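<p>The per-tarball check reduces to a few lines; a reconstruction of the idea
(helper name is mine, not the original script):</p>
<pre><code>import tarfile


def authors_ok(tarball_path):
    """Return True if the sdist at tarball_path contains a
    non-empty AUTHORS file."""
    with tarfile.open(tarball_path) as tar:
        for member in tar.getmembers():
            if member.name.endswith("/AUTHORS"):
                content = tar.extractfile(member).read()
                return bool(content.strip())
    return False
</code></pre>
<p>The driver just loops over <code>git tag</code>, runs <code>python setup.py sdist</code>
on each checkout, and calls that on the result.</p>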
<p><strong> understand how Solum does devstack testing in OpenStack’s <span class="caps">CI</span> system and apply that to Designate </strong></p>
<ul>
<li>Work in progress</li>
<li>Learned about Devstack Gate: http://ci.openstack.org/devstack-gate.html<ul>
<li>Found a broken link (devstack_launch_slave.pp) to correct (in openstack-infra/config)</li>
<li>Found a high level diagram at https://wiki.openstack.org/wiki/InfraTeam</li>
<li>Drawing diagrams of the processes of building devstack gate nodes (see Nodepool project)</li>
</ul>
</li>
<li>Following the “Simulating Devstack Gate Tests” instructions to test
Devstack+Designate tests locally before submitting change requests.</li>
</ul>
<p><strong> create a virtual machine definition that will allow users to try Designate </strong></p>
<ul>
<li>Work in progress</li>
<li>Once I started learning more about the Devstack Gate and Nodepool
projects, I wondered if I missed some existing tools that I should
take advantage of.</li>
</ul>
<p><strong> openstack-infra </strong></p>
<p>Much of the work I have planned for myself in the Designate project will overlap
with the openstack-infra team. To get familiar with their work, I have:</p>
<ul>
<li>Signed up for openstack-infra mailing list</li>
<li>Put <a href="https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting">InfraTeamMeeting</a> on my calendar</li>
</ul> Tim Freund2010-01-01T00:00:00Z2010-01-01T00:00:00Z/changelog/archive.html