Tomáš Tomeček
https://blog.tomecek.net/index.xml
Recent content on Tomáš Tomeček
[theme](https://github.com/digitalcraftsman/hugo-steam-theme) | [hugo](https://gohugo.io/) | [netlify](https://www.netlify.com/)

Building Container Images with Buildah and Ansible
https://blog.tomecek.net/post/building-containers-with-buildah-and-ansible/
Sun, 04 Feb 2018 11:12:00 +0100

<p>Do you use Ansible roles to provision your infrastructure? And would you like
to use those very same roles to create container images? You came to the right
place!</p>
<p>We are working on a project (and you probably heard of it already) called
<a href="https://github.com/ansible/ansible-container">Ansible Container</a>. It&rsquo;s not
just about creating container images. It covers the complete workflow of
a containerized application: build, local run, test, and deploy.</p>
<p>In this blog post, I would like to show you how Ansible Container does those
builds — from an Ansible role to a container image.</p>
<h2 id="let-s-start">Let&rsquo;s start</h2>
<p>&hellip;with the Ansible role itself. If you are not familiar with the role concept,
look at <a href="http://docs.ansible.com/ansible/latest/playbooks_reuse_roles.html">the excellent Ansible
documentation</a>.</p>
<p>We will create a simple role which just installs nginx. Since I&rsquo;m most
comfortable with
<a href="https://download.fedoraproject.org/pub/fedora/linux/releases/27/Docker/x86_64/images/Fedora-Docker-Base-27-1.6.x86_64.tar.xz">Fedora</a>,
that&rsquo;s what we&rsquo;ll use. Feel free to use the base image which you are most
familiar with.</p>
<p>This is how it looks:</p>
<pre><code class="language-yaml">$ cat roles/sample-nginx/tasks/main.yml
- name: Install nginx
dnf:
name: nginx
state: installed
- name: Clean dnf metadata
command: dnf clean all
</code></pre>
<p>Simple and straightforward. Just install the nginx package and clean the package
manager metadata &ndash; we don&rsquo;t want those lingering in the image. Just look at how
much useless data you may get. With metadata:</p>
<pre><code class="language-console">REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest fefdf36aa71b 14 seconds ago 441 MB
</code></pre>
<p>And without them:</p>
<pre><code class="language-console">REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest addb24556d33 23 seconds ago 268 MB
</code></pre>
<h2 id="can-we-create-the-container-now">Can we create the container now?</h2>
<p>Okay, we have the role, now we need to run it against a container. For that, we
need to write a simple playbook which will:</p>
<ol>
<li>Create the container.</li>
<li>Execute the role in the container.</li>
<li>Commit the container into a container image.</li>
</ol>
<p>Something like this should be sufficient:</p>
<pre><code>---
- hosts: localhost
  connection: local
  vars:
    image: fedora:27
    container_name: build_container
    image_name: nginx
  tasks:
    - name: Make the base image available locally
      docker_image:
        name: '{{ image }}'

    - name: Create the container
      docker_container:
        image: '{{ image }}'
        name: '{{ container_name }}'
        command: sleep infinity

    - name: Add the newly created container to the inventory
      add_host:
        hostname: '{{ container_name }}'
        ansible_connection: docker
        ansible_python_interpreter: /usr/bin/python3  # fedora container doesn't ship python2

    - name: Run the role in the container
      delegate_to: '{{ container_name }}'
      include_role:
        name: sample-nginx

    - name: Commit the container
      command: &gt;
        docker commit
        -c 'CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]'
        {{ container_name }} {{ image_name }}

    - name: Remove the container
      docker_container:
        name: '{{ container_name }}'
        state: absent
</code></pre>
<p>So, what&rsquo;s happening here?</p>
<ol>
<li>We first pull the base container image.</li>
<li>Then we create a container out of it. The important part is <code>sleep
infinity</code> &ndash; the container needs to be running while we execute the role in
it.</li>
<li>Once the container is running, we need to add it to Ansible&rsquo;s inventory. We
are also setting that host (the container) to be available via docker
connection plugin.</li>
<li>We are ready to run the role! The snippet is actually taken from
<a href="http://docs.ansible.com/ansible/latest/intro_inventory.html#non-ssh-connection-types">Ansible
documentation</a>.</li>
<li>Our container is provisioned, so we can commit it, turning it into a container image.</li>
<li>And finally, let&rsquo;s remove the container, we don&rsquo;t need it anymore.</li>
</ol>
<h2 id="all-the-files-together">All the files together</h2>
<p>I put all the files inside a git repository so you don&rsquo;t have to copy-paste
them:
<a href="https://github.com/TomasTomecek/ansible-nginx-container">TomasTomecek/ansible-nginx-container</a>.</p>
<p>The repo looks like this:</p>
<pre><code>.
├── ansible.cfg
├── inventory
├── provision-container.yml
└── roles
└── sample-nginx
└── tasks
└── main.yml
</code></pre>
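<p>The <code>inventory</code> and <code>ansible.cfg</code> are tiny. Something along these lines is enough (a sketch, not necessarily the exact contents of the repo):</p>
<pre><code class="language-console">$ cat inventory
localhost

$ cat ansible.cfg
[defaults]
inventory = inventory
</code></pre>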
<p>Let&rsquo;s run the thing:</p>
<pre><code class="language-console">$ ansible-playbook provision-container.yml
PLAY [localhost] **********************************************************************
TASK [Gathering Facts] ****************************************************************
ok: [localhost]
TASK [Make the base image available locally] ******************************************
ok: [localhost]
TASK [Create the container] ***********************************************************
changed: [localhost]
TASK [Add the newly created container to the inventory] *******************************
changed: [localhost]
TASK [Run the role in the container] **************************************************
TASK [sample-nginx : Install nginx] ***************************************************
changed: [localhost -&gt; build_container]
TASK [sample-nginx : Clean dnf metadata] **********************************************
[WARNING]: Consider using dnf module rather than running dnf
changed: [localhost -&gt; build_container]
TASK [commit the container] ***********************************************************
changed: [localhost]
TASK [remove the container] ***********************************************************
changed: [localhost]
PLAY RECAP ****************************************************************************
localhost : ok=8 changed=6 unreachable=0 failed=0
</code></pre>
<pre><code class="language-console">$ docker images nginx
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest addb24556d33 23 seconds ago 268 MB
</code></pre>
<p>Does it actually work?</p>
<pre><code>$ docker run -d nginx
</code></pre>
<pre><code>$ curl -s 172.17.0.2 | grep title
&lt;title&gt;Test Page for the Nginx HTTP Server on Fedora&lt;/title&gt;
</code></pre>
<p>Yep, it does.</p>
<p>That was pretty mind-blowing, right? But we can still do better.</p>
<h2 id="now-without-daemons">Now without daemons</h2>
<p>The main problem I have with the proposed solution is that we need a pretty big
daemon just to be able to create a container image. The truth is that I don&rsquo;t want
such daemons. Luckily, we can use
<a href="https://www.projectatomic.io/blog/2017/06/introducing-buildah/">buildah</a> — a
simple CLI tool designed to create container images.</p>
<p>We don&rsquo;t need to make any changes to our role. Unfortunately, the playbook needs
to change a lot. So let&rsquo;s add support for buildah to it!</p>
<pre><code>---
- hosts: localhost
  connection: local
  vars:
    image: fedora:27
    container_name: build_container
    image_name: nginx
    container_engine: buildah  # or docker
  tasks:
    - name: Obtain base image and create a container out of it
      command: 'buildah from --name {{ container_name }} docker://{{ image }}'
      become: true
      when: container_engine == 'buildah'

    - block:
        - name: Make the base image available locally
          docker_image:
            name: '{{ image }}'
        - name: Create the container
          docker_container:
            image: '{{ image }}'
            name: '{{ container_name }}'
            command: sleep infinity
      when: container_engine == 'docker'

    - name: Add the newly created container to the inventory
      add_host:
        hostname: '{{ container_name }}'
        ansible_connection: '{{ container_engine }}'
        ansible_python_interpreter: /usr/bin/python3  # fedora container doesn't ship python2

    - name: Run the role in the container
      delegate_to: '{{ container_name }}'
      include_role:
        name: sample-nginx

    - block:
        - name: Change default command of the container image
          command: 'buildah config --cmd &quot;nginx -g \&quot;daemon off;\&quot;&quot; {{ container_name }}'
        - name: Commit the container and make it an image
          command: 'buildah commit --rm {{ container_name }} docker-daemon:{{ image_name }}:latest'
      when: container_engine == 'buildah'

    - block:
        - name: Commit the container and make it an image
          command: &gt;
            docker commit
            -c 'CMD [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]'
            {{ container_name }} {{ image_name }}
        - name: Remove the container
          docker_container:
            name: '{{ container_name }}'
            state: absent
      when: container_engine == 'docker'
</code></pre>
<p>What did we do?</p>
<ul>
<li>We kept the existing code and just wrapped docker-specific tasks with <code>when:
container_engine == 'docker'</code>.</li>
<li>We added more tasks specific to <code>buildah</code>.</li>
<li>Two tasks needed almost no changes: role execution and inventory update.</li>
</ul>
<p>Let&rsquo;s go briefly through the additions:</p>
<ul>
<li>Command <code>buildah from</code> fetches an image if it&rsquo;s not present locally and
creates a container out of it. Two in one.</li>
<li><code>buildah</code> has a dedicated command, <code>config</code>, to change container image
metadata.</li>
<li>And finally we just commit the container. It&rsquo;s pretty awesome that you can
push the image straight into the local docker daemon.</li>
</ul>
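<p>If you&rsquo;d like to try that flow by hand before running the playbook, it boils down to three commands (run as root; the names match the playbook variables):</p>
<pre><code class="language-console">$ sudo buildah from --name build_container docker://fedora:27
$ sudo buildah config --cmd 'nginx -g &quot;daemon off;&quot;' build_container
$ sudo buildah commit --rm build_container docker-daemon:nginx:latest
</code></pre>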
<p>Let&rsquo;s build using buildah:</p>
<pre><code>$ ansible-playbook provision-container.yml
PLAY [localhost] **********************************************************************
TASK [Gathering Facts] ****************************************************************
ok: [localhost]
TASK [Obtain base image and create a container out of it] *****************************
changed: [localhost]
TASK [Make the base image available locally] ******************************************
skipping: [localhost]
TASK [Create the container] ***********************************************************
skipping: [localhost]
TASK [Add the newly created container to the inventory] *******************************
changed: [localhost]
TASK [Run the role in the container] **************************************************
TASK [sample-nginx : Install nginx] ***************************************************
fatal: [localhost]: UNREACHABLE! =&gt; {&quot;changed&quot;: false, &quot;msg&quot;: &quot;Authentication or permission failure. In some cases, you
may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote
temp path in ansible.cfg to a path rooted in \&quot;/tmp\&quot;. Failed command was: ( umask 77 &amp;&amp; mkdir -p \&quot;` echo
~/.ansible/tmp/ansible-tmp-1517739453.02-84600074672209 `\&quot; &amp;&amp; echo ansible-tmp-1517739453.02-84600074672209=\&quot;` echo
~/.ansible/tmp/ansible-tmp-1517739453.02-84600074672209 `\&quot; ), exited with result 1&quot;, &quot;unreachable&quot;: true}
PLAY RECAP ****************************************************************************
localhost : ok=3 changed=2 unreachable=1 failed=0
</code></pre>
<p>Whoops! Something&rsquo;s not quite right. When this happens, I advise you to run with <code>-vvvv</code>:</p>
<pre><code class="language-console">TASK [sample-nginx : Install nginx] ***************************************************
task path: /home/tt/g/the-real-blog/nginx-container/roles/sample-nginx/tasks/main.yml:1
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/dnf.py
&lt;build_container&gt; RUN ['buildah', 'mount', '--', 'build_container']
&lt;build_container&gt; RUN ['buildah', 'run', '--', 'build_container', '/bin/sh', '-c', 'echo ~ &amp;&amp; sleep 0']
&lt;build_container&gt; RUN ['buildah', 'run', '--', 'build_container', '/bin/sh', '-c', '( umask 77 &amp;&amp; mkdir -p &quot;` echo
~/.ansible/tmp/ansible-tmp-1517739667.49-225002665068293 `&quot; &amp;&amp; echo ansible-tmp-1517739667.49-225002665068293=&quot;` echo
~/.ansible/tmp/ansible-tmp-1517739667.49-225002665068293 `&quot; ) &amp;&amp; sleep 0']
&lt;build_container&gt; RUN ['buildah', 'umount', '--', 'build_container']
fatal: [localhost]: UNREACHABLE! =&gt; {
&quot;changed&quot;: false,
&quot;msg&quot;: &quot;Authentication or permission failure. In some cases, you may have been able to authenticate and did not have
permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in
\&quot;/tmp\&quot;. Failed command was: ( umask 77 &amp;&amp; mkdir -p \&quot;` echo
~/.ansible/tmp/ansible-tmp-1517739667.49-225002665068293 `\&quot; &amp;&amp; echo ansible-tmp-1517739667.49-225002665068293=\&quot;`
echo ~/.ansible/tmp/ansible-tmp-1517739667.49-225002665068293 `\&quot; ), exited with result 1, stderr output:
time=\&quot;2018-02-04T11:21:07+01:00\&quot; level=error msg=\&quot;mkdir /var/lib/containers/storage/mounts: permission
denied\nmkdir /var/lib/containers/storage/mounts: permission denied\&quot; \n&quot;,
&quot;unreachable&quot;: true
}
</code></pre>
<p>That&rsquo;s much more informative, the important part being:</p>
<pre><code>stderr output: time=\&quot;2018-02-04T11:21:07+01:00\&quot; level=error msg=\&quot;mkdir /var/lib/containers/storage/mounts: permission
denied\nmkdir /var/lib/containers/storage/mounts: permission denied\&quot; \n&quot;
</code></pre>
<p>What&rsquo;s happening here is that <code>ansible-playbook</code> is invoking buildah to run a
command inside the build container. Buildah needs to access
<code>/var/lib/containers/storage</code> and doesn&rsquo;t have the right permissions when
invoked as your unprivileged user:</p>
<pre><code class="language-console">$ ll -d /var/lib/containers/storage
drwx------. 8 root root 4.0K Nov 13 14:05 /var/lib/containers/storage
</code></pre>
<p>Unfortunately, the original error message is not very helpful. The solution here is simple — <code>sudo</code>:</p>
<pre><code class="language-console">$ sudo ansible-playbook provision-container.yml
PLAY [localhost] ***********************************************************************
TASK [Gathering Facts] *****************************************************************
ok: [localhost]
TASK [Obtain base image and create a container out of it] ******************************
changed: [localhost]
TASK [Make the base image available locally] *******************************************
skipping: [localhost]
TASK [Create the container] ************************************************************
skipping: [localhost]
TASK [add the newly created container to the inventory] ********************************
changed: [localhost]
TASK [run the role in the container] ***************************************************
TASK [sample-nginx : install nginx] ****************************************************
changed: [localhost -&gt; build_container]
TASK [sample-nginx : clean dnf metadata] ***********************************************
[WARNING]: Consider using dnf module rather than running dnf
changed: [localhost -&gt; build_container]
TASK [Change default command of the container image] ***********************************
changed: [localhost]
TASK [Commit the container and make it an image] ***************************************
changed: [localhost]
TASK [Commit the container and make it an image] ***************************************
skipping: [localhost]
TASK [remove the container] ************************************************************
skipping: [localhost]
PLAY RECAP *****************************************************************************
localhost : ok=7 changed=6 unreachable=0 failed=0
</code></pre>
<p>That worked just fine. Let&rsquo;s see if we have the container image in dockerd:</p>
<pre><code class="language-console">$ docker images docker.io/nginx
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/nginx latest 8f1aaab79770 25 seconds ago 268 MB
</code></pre>
<p>Looks okay. Does it work?</p>
<pre><code class="language-console">$ docker run -d docker.io/nginx
3165ec03253bae24951d20ab7a4a3905f824b67304eca16ae0ce9ca01504c411
$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS                  PORTS   NAMES
3165ec03253b   docker.io/nginx   &quot;nginx -g 'daemon ...&quot;   2 seconds ago   Up Less than a second           kind_mayer
$ curl -s 172.17.0.2 | grep title
&lt;title&gt;Test Page for the Nginx HTTP Server on Fedora&lt;/title&gt;
</code></pre>
<p>Sweet!</p>
<h2 id="conclussion">Conclussion</h2>
<p>We created a container image using an Ansible role without any daemons. Pretty
awesome, right?!</p>
<p>If you don&rsquo;t like the long playbook we had to create to pull this off, I advise
you to check out <a href="https://github.com/ansible/ansible-container">Ansible
Container</a> — it contains the
logic of that playbook (and much more): all you need to provide is the
container metadata and the roles. <a href="https://github.com/ansible/ansible-container/pull/790">We are still working on integrating buildah
into it</a>.</p>
<p>It&rsquo;s likely that you will need to tinker with your roles a bit to make them work
in containers. The same applies to roles from <a href="https://galaxy.ansible.com/">Ansible
Galaxy</a>. While working on this blog post, I tried
several popular nginx Ansible roles from Ansible Galaxy and, to be honest,
none of them worked in a container environment out of the box.</p>
<p>And finally, I can&rsquo;t wait to start running my containers with
<a href="https://github.com/projectatomic/libpod/blob/master/docs/podman.1.md">podman</a>.</p>Building container image with modular OS from scratchhttps://blog.tomecek.net/post/creating-modular-container-image/
Wed, 13 Sep 2017 16:27:47 +0200https://blog.tomecek.net/post/creating-modular-container-image/<p>We were sitting at &ldquo;Modularity UX feedback&rdquo; session at Flock 2017. Sinny Kumari
raised an interesting question: &ldquo;Can I create a container image with modular OS
locally myself?&rdquo; Sinny wanted to try the modular OS on different CPU
architectures.</p>
<p>The container image can be created using Image Factory, which can be really
tough to set up.</p>
<p>I&rsquo;m so glad that the platform team already solved this problem during the development
of Boltron, in their GitHub repo
<a href="https://github.com/fedora-modularity/baseruntime-docker-tests">fedora-modularity/baseruntime-docker-tests</a>.</p>
<p>They created a neat way of creating a docker base image from scratch using mock.</p>
<p>In order to do this, you should follow the instructions from the
<a href="https://github.com/fedora-modularity/baseruntime-docker-tests#package-setup">README</a>
of the repo. Before running <code>avocado run setup.py</code>, we need to change the configuration
of mock, because by default it uses Boltron (F26) repos and targets x86_64.</p>
<p>I did this on my Raspberry Pi. The configuration is present in
<a href="https://github.com/fedora-modularity/baseruntime-docker-tests/blob/master/resources/base-runtime-mock.cfg">resources/base-runtime-mock.cfg</a>:</p>
<ol>
<li><p>I targeted the ARM CPU architecture:</p>
<pre><code class="language-diff">-config_opts['target_arch'] = 'x86_64'
-config_opts['legal_host_arches'] = ('x86_64',)
+config_opts['target_arch'] = 'armhfp'
+config_opts['legal_host_arches'] = ('armhfp', 'armv7l' )
</code></pre></li>
<li><p>I used the Modular Rawhide compose repo:</p>
<pre><code class="language-diff">-baseurl=https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/
+baseurl=https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-Rawhide/compose/Server/armhfp/os/
</code></pre></li>
</ol>
<p>Modular dnf is still not available in mainline. Martin provides the RPM via
his COPR repo
<a href="https://copr.fedorainfracloud.org/coprs/mhatina/DNF-Modules/">mhatina/DNF-Modules</a>.
One important note: the modular DNF is only available for the
<code>x86_64</code>, <code>i386</code> and <code>ppc64le</code> architectures.</p>
<p>Let&rsquo;s build the image!</p>
<pre><code>$ sudo python2 ./setup.py
...
Successfully built dac3c6598bef
command 'docker rmi base-runtime-smoke-scratch' succeeded with output:
Untagged: base-runtime-smoke-scratch:latest
PASS 1-./setup.py:BaseRuntimeSetupDocker.testCreateDockerImage
Test results available in /tmp/avocado_avocado.core.jobwv6R1t
</code></pre>
<p>Once the build is done, we can try it out:</p>
<pre><code>$ sudo docker run --rm -ti base-runtime-smoke-scratch:latest bash
bash-4.4# cat /etc/system-release
Fedora modular release 27 (Twenty Seven)
bash-4.4# uname -i
armv7l
</code></pre>
<p>Once the patches for modular DNF are in mainline, this will be a lot more interesting!</p>

Flock 2017
https://blog.tomecek.net/post/flock-2017/
Fri, 08 Sep 2017 12:10:24 +0200

<p>Flock is over. It was nice: good conversations, useful talks and workshops, and it
was awesome to see everyone once again. And I liked the location.</p>
<p>The theme of this year&rsquo;s Flock was &ldquo;do-sessions&rdquo;: fewer talks and more
workshops, hackfests and discussions. I liked that I could try things and be
part of the discussions, but at the same time, I missed the big talks. Also, some
talks with similar topics were scheduled at the same time, so one had to make
tough choices.</p>
<p>One of the most visible things at Flock were the comic-style
portraits! These were drawn by a professional artist. Everyone was running
around with a portrait of their own; I loved this! The booth was sponsored by
the RHEL product marketing team.</p>
<p><img src="https://blog.tomecek.net/img/comics_portrait.jpg" alt="How cool is that?!" /></p>
<p>Here are some of my assorted notes from talks and workshops I attended.</p>
<h2 id="keynote">Keynote</h2>
<p>Matt walked us through his favorite graphs and the current state of affairs. They were
looking pretty good!</p>
<ul>
<li>Fedora 25 and 26 received good feedback and press talked about those releases
positively.</li>
<li>It looks like Fedora is getting more and more installs (26 is bigger than
25, 25 is bigger than 24).</li>
<li>Matt introduced well-known projects/initiatives, such as Project Atomic, CI
initiative, Modularity.
<ul>
<li>Ambassadors should focus on these:</li>
<li><a href="https://communityblog.fedoraproject.org/ambassadors-fedora-strategy/">https://communityblog.fedoraproject.org/ambassadors-fedora-strategy/</a></li>
</ul></li>
<li>Graph of contributors is pretty much consistent for ~3 years.</li>
</ul>
<h2 id="factory-2">Factory 2</h2>
<p>Mike Bonnet talked about the vision of the Factory 2 team and introduced the basic concepts
of the new tools the team is working on. Most of the tools got a standalone
presentation of their own by their core contributor.</p>
<p>The tools include:</p>
<ul>
<li>Module Build Service</li>
<li>Freshmaker</li>
<li>Greenwave</li>
<li>WaiverDB</li>
<li>ResultsDB</li>
<li>Arbitrary Branching</li>
</ul>
<p>The interesting point of Mike&rsquo;s talk was when he presented the motivation
for all the automation the team is working on:</p>
<ul>
<li>There are more than 20,000,000 tasks in koji.</li>
<li>The latest Fedora release contains 54,000 binary RPMs.
<ul>
<li>Which is 215 RPMs per active maintainer.</li>
</ul></li>
</ul>
<p>I hope it&rsquo;s obvious that humans cannot manage this, especially with the
introduction of modules. Hence all the new tools, which will make our lives
easier.</p>
<h2 id="freshmaker">Freshmaker</h2>
<ul>
<li>Contains policies to rebuild artifacts.</li>
<li>And initiates rebuilds of artifacts.</li>
<li>There are various events on which one may want to rebuild an artifact.
<ul>
<li>E.g. for container images:</li>
<li>When RPM hits stable (slower workflow).</li>
<li>When RPM (or module) passes tests (faster workflow).</li>
</ul></li>
</ul>
<p><img src="https://blog.tomecek.net/img/jkaluza_freshmaker.jpg" alt="Jan talking about freshmaker" /></p>
<h2 id="greenwave-waiverdb">Greenwave &amp; WaiverDB</h2>
<ul>
<li>A service for making decisions &ndash; is this artifact good?</li>
<li>Gating points at certain places, e.g. when a test of an artifact has finished.</li>
<li>Gating is based on policies.</li>
<li><code>dist.*</code> checks, which you may find in bodhi, are Fedora policies.</li>
<li>WaiverDB is a database of waivers against ResultsDB.</li>
</ul>
<h2 id="arbitrary-branching">Arbitrary Branching</h2>
<ul>
<li>What happens when a module goes EOL?
<ul>
<li>Should there be a dnf plugin to inform the user?</li>
</ul></li>
<li>Arbitrary branches can be used <strong>only with</strong> modules.
<ul>
<li>The maintainer is able to release the module via Bodhi.</li>
</ul></li>
</ul>
<h2 id="multi-arch-container-image-build-system">Multi-arch container image build system</h2>
<p>Adam Miller talked about the architecture of this system:</p>
<ul>
<li>There will be one OpenShift cluster per CPU architecture.</li>
<li>There will be one additional OpenShift cluster which will orchestrate the builds.</li>
</ul>
<p>There was a question from the audience: why not label nodes and use selectors?
Adam responded that the upstream team tried to go for this solution, but
realized it would be too hard to implement, so they chose the path of multiple
clusters.</p>
<h2 id="module-build-service">Module build service</h2>
<p>Ralph had a nice and short presentation about MBS. The most important part for me were the future plans. Here they are:</p>
<ul>
<li><strong>Build-time filtering</strong>: discard packages (built locally?) which are
specified in <code>filter</code> section of modulemd.</li>
<li><strong>Transitive dependencies</strong>: MBS should enable modules, which are defined as
runtime dependencies of modules listed in buildrequires — the workaround for
now is to put those modules in buildrequires.</li>
<li><strong>Smarter component re-use</strong>: fewer rebuilds!</li>
<li><strong>The context value</strong> (of modulemd): this one is related to the next point —
context value should distinguish builds which are performed against different
platforms.</li>
<li><strong>Stream expansion</strong>: building a module against multiple platforms.</li>
</ul>
<p>Here are <a href="http://threebean.org/presentations/mbs-flock17/">Ralph&rsquo;s slides</a>.</p>
<h2 id="future-of-modularity">Future of Modularity</h2>
<p>Same as MBS talk: the most significant slide was — what&rsquo;s coming?</p>
<ul>
<li><strong>Small human edit — generated outputs</strong> — I don&rsquo;t recall this one, but I
hope it could be related to generating modulemds and let humans just polish
the final recipe.</li>
<li><strong>Generate from SRPMs</strong> — is it possible to generate a modulemd from an SRPM?</li>
<li><strong>Tool to see the whole ecosystem</strong> — in which module lives package X?</li>
<li><strong>Distribution-wide tests</strong> — can this modular distribution be installed?</li>
<li><strong>COPR &amp; Varant</strong> to be used for local development and scratch builds.</li>
</ul>
<p>You can find Langdon&rsquo;s slides over <a href="https://www.slideshare.net/langdonwhite/modularity-the-future-building-packaging">here</a>.</p>
<h2 id="modularity-ux-feedback">Modularity UX feedback</h2>
<p>This session was up every day. It was interesting to see community members
go through the workflow of modular dnf. They asked some very good questions and
found several issues (me included!). I think Martin Hatina received very
solid feedback, which he&rsquo;ll be working on in the coming weeks. The main outcome for me
was when <a href="https://twitter.com/ksinny">Sinny Kumari</a> asked about a container
image of boltron for different CPU architectures. Obviously we didn&rsquo;t provide
anything like this, so I started working on a guide on how to do this locally.
Expect another blog post!</p>
<h2 id="let-s-build-a-module">Let&rsquo;s build a module</h2>
<p>This was my workshop. Overall, I received good feedback. Unfortunately we
didn&rsquo;t have enough time to build some real modules, nor was the Internet
connection stable enough. The most interesting outcome for me was that people
could see what it takes to create a new module and what the steps are to build
it locally. Clearly, this was confusing to some, since the process of defining
a modulemd is not completely straightforward. There were a lot of questions and
fruitful discussion. But in the end, everyone seemed to be on the same page,
which is awesome. The interesting part is that prior to my session, we were
having a pretty advanced discussion about what it would take to modularize the
whole distribution. And after that &ldquo;complex&rdquo; discussion, we got back &ldquo;to the
roots&rdquo; of creating modules. It was great to teach everyone the essentials of
making modules so people now understand what modules really are and how to
create them.</p>

Ansible Container usage
https://blog.tomecek.net/post/ansible-container-usage/
Mon, 03 Jul 2017 15:31:49 +0200

<p><a href="https://twitter.com/gregdek">Greg</a> told me at AnsibleFest that we don&rsquo;t know how many users Ansible Container has. PyPI no longer directly provides download counts. Except&hellip; I recently stumbled upon <a href="https://relativity.fi/blog/analyzing-pypi-download-statistics-from-zero-to-half-a-million-downloads-in-9-months">this blog post</a>, which talks about getting download stats for PyPI packages using Google BigQuery. So let&rsquo;s do that!</p>
<p>We need to execute a BigQuery SQL-like query. Let&rsquo;s do one <a href="https://relativity.fi/blog/analyzing-pypi-download-statistics-from-zero-to-half-a-million-downloads-in-9-months/">from the blog post mentioned above</a> which shows daily downloads. Here&rsquo;s how you can run the query yourself:</p>
<ol>
<li>Just open the <a href="https://bigquery.cloud.google.com/">Google BigQuery service</a>.</li>
<li><p>Enter this query:</p>
<pre><code class="language-sql">SELECT
STRFTIME_UTC_USEC(timestamp, &quot;%Y-%m-%d&quot;) AS yyyymmdd,
COUNT(*) as total_downloads,
FROM
TABLE_DATE_RANGE(
[the-psf:pypi.downloads],
DATE_ADD(CURRENT_TIMESTAMP(), -300, &quot;day&quot;),
DATE_ADD(CURRENT_TIMESTAMP(), -1, &quot;day&quot;)
)
WHERE
file.project = 'ansible-container'
GROUP BY
yyyymmdd
ORDER BY
yyyymmdd DESC
</code></pre></li>
<li><p>And just run it.</p></li>
</ol>
<p>You can even export it into spreadsheets afterwards. I did that and annotated the chart a bit:</p>
<p><img src="https://blog.tomecek.net/img/ansible-container-usage.png" alt="Ansible Container Usage" /></p>
<p>As you can see, a new release usually creates a spike. On the other hand, there are a bunch of other spikes, usually caused, I assume, by someone talking or writing about Ansible Container.</p>
<p>Overall, more and more people seem to be using Ansible Container. Can&rsquo;t wait for <code>1.0</code>.</p>
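<p>By the way, if you don&rsquo;t feel like clicking through the BigQuery console, the <code>pypinfo</code> tool from the links below wraps the same dataset (it needs Google Cloud credentials set up):</p>
<pre><code class="language-console">$ pip install pypinfo
$ pypinfo ansible-container
</code></pre>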
<p>Here are some more links related to download stats:</p>
<ul>
<li><a href="https://langui.sh/2016/12/09/data-driven-decisions/">A bunch of helpful queries.</a></li>
<li><a href="http://moderndata.plot.ly/analyzing-plotlys-python-package-downloads/">Using plotly to create charts from the query.</a></li>
<li><a href="https://github.com/ofek/pypinfo">CLI tool to view download stats.</a></li>
</ul>

My GitHub pull request workflow
https://blog.tomecek.net/post/my-github-pull-request-workflow/
Wed, 14 Jun 2017 09:30:25 +0200

<p>My colleague recently asked me how to correctly handle pull requests. Here&rsquo;s
how I&rsquo;m doing it.</p>
<p>Everything starts with forking a repository so you can push your changes to
your personal fork and then submit them as a pull request. So head over to the
GitHub repository and hit the Fork button.</p>
<p>Once the repository is forked, you need to clone it:</p>
<pre><code class="language-shell">$ git clone git@github.com:$GITHUB_HANDLE/$PROJECT.git
$ cd $PROJECT
</code></pre>
<p>You should also add a remote pointing to the upstream repo so you can update your fork:</p>
<pre><code class="language-shell">$ git remote add upstream https://github.com/$UPSTREAM/$PROJECT.git
</code></pre>
<p>I usually set it up via <code>https://</code> so I don&rsquo;t accidentally push to upstream when I have permissions.</p>
<p>We can finally start working on our changes. So let&rsquo;s create a feature branch:</p>
<pre><code class="language-shell">$ git checkout -b the-best-feature
</code></pre>
<p>Now we make our changes and commit them. Then we push&hellip;</p>
<pre><code class="language-shell">$ git push -u
</code></pre>
<p>(This command will push to origin, our fork, and will start tracking the remote branch.)</p>
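<p>If git complains that it doesn&rsquo;t know where to push (older versions of git don&rsquo;t pick the upstream automatically), spell it out once:</p>
<pre><code class="language-shell">$ git push -u origin the-best-feature
</code></pre>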
<p>Back in the browser, let&rsquo;s open the pull request.</p>
<h2 id="we-need-to-update-our-pull-request">We need to update our pull request</h2>
<p>Upstream maintainers reviewed your pull request and are requesting changes. At
the same time, it&rsquo;s likely that the master branch of the upstream repository got
updated and we need to pull the changes. This is how you do that:</p>
<pre><code>$ git checkout the-best-feature
$ git pull --rebase upstream master
</code></pre>
<p>In case there are conflicts, just resolve them and do:</p>
<pre><code>$ git add -u
$ git rebase --continue
</code></pre>
<p>Once the rebase is done, we should update the pull request with the requested
changes. I usually amend existing commits, like this:</p>
<pre><code>$ git commit
$ git rebase -i HEAD~3
</code></pre>
<p>The last command will start an interactive rebase &ndash; that&rsquo;s the place where you
can reorder, change and squash commits &ndash; very useful. You should also rebase only
the commits you are proposing. Rebasing commits already present in upstream will usually
make the pull request un-mergeable.</p>
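<p>An easy way to stay on the safe side is to rebase interactively on top of the upstream branch; that touches only your own commits:</p>
<pre><code class="language-shell">$ git rebase -i upstream/master
</code></pre>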
<p>In case you screw up badly, it&rsquo;s easy to recover. Just reset your branch to the
state of upstream master and start cherry-picking commits from the feature branch:</p>
<pre><code class="language-shell">$ git branch checkpoint-of-the-best-feature
$ git reset --hard upstream/master
$ git cherry-pick ${MY_OLDEST_COMMIT_FROM_THE_CHECKPOINT_BRANCH}
</code></pre>
<p>As soon as you are okay with the changes, you need to force-push (since you
changed the history):</p>
<pre><code>$ git push --force
</code></pre>
<p>And that&rsquo;s it!</p>
<h2 id="optimize">Optimize!</h2>
<p>As you can see, we used tons of commands, arguments, options. I suggest using aliases:</p>
<pre><code class="language-shell">alias g=&quot;git&quot;
alias gp=&quot;git push -u&quot;
alias gpf=&quot;git push -f&quot;
alias gc=&quot;git commit --verbose&quot;
alias gca=&quot;git commit --verbose --amend&quot;
alias gpum=&quot;git pull --rebase upstream master&quot;
alias gau=&quot;git add --verbose --update -- .&quot;
alias gr=&quot;git rebase&quot;
alias gri=&quot;git rebase -i&quot;
alias grc=&quot;git rebase --continue&quot;
alias gb=&quot;git checkout -b&quot;
alias grau=&quot;git remote add upstream&quot;
</code></pre>
<p>I even got so far that I created <a href="https://github.com/TomasTomecek/dotfiles/blob/master/bin/gh-fork">a script</a> to:</p>
<ol>
<li>Fork the repository via GitHub API.</li>
<li>Clone the repository.</li>
<li>Add upstream remote.</li>
</ol>

Producing up-to-date container images
https://blog.tomecek.net/post/producing-up-to-date-images/
Thu, 01 Jun 2017 11:00:00 +0200

<p>Even though Docker Hub supports automated builds — triggering builds when you
push to a git repository — you still need to actually push to your git
repository in order to get the image built. Pushing just to bump versions and
re-run the tests is pretty tedious. It would be much simpler to let the image
update itself automatically and only step in to resolve issues.</p>
<p>And that&rsquo;s actually pretty easy to do. All you need is a GitHub repository,
a Docker Hub repository and one script running as a cron job in Travis CI. And
that&rsquo;s it.</p>
<p>So let&rsquo;s take a closer look at how <a href="https://github.com/TomasTomecek/rust-container/">I set this up for Rust</a>.</p>
<ol>
<li><p>Travis CI initiates a build every day using its <a href="https://github.com/travis-ci/beta-features/issues/1">Cron Jobs</a> feature.</p></li>
<li><p><a href="https://github.com/TomasTomecek/rust-container/blob/master/hack/ci.sh">This build script</a> is executed. It&rsquo;s as simple as:</p>
<pre><code>mkdir -p ~/.docker &amp;&amp; echo &quot;${DOCKER_AUTH}&quot; &gt;~/.docker/config.json
docker build --tag=$USER/rust .
make test
TESTS_PASSED=$?
if [[ $TESTS_PASSED == 0 ]] ; then
    VERSION=$(docker run $USER/rust rustc --version)
    docker tag $USER/rust tomastomecek/rust:$VERSION
    docker push tomastomecek/rust:$VERSION
fi
</code></pre></li>
<li><p>So the image gets built, tested, tagged with correct version and then it&rsquo;s pushed to Docker Hub.</p></li>
<li><p>The tests verify that:</p>
<ul>
<li>Rust compiler is able to compile Rust code.</li>
<li>Cargo is able to create a new project.</li>
<li>This cargo project can be built.</li>
</ul></li>
</ol>
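<p>The Travis CI configuration itself is minimal. A sketch of the important bits (the real <code>.travis.yml</code> is linked below):</p>
<pre><code>$ cat .travis.yml
sudo: required
services:
  - docker
script:
  - ./hack/ci.sh
</code></pre>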
<p>And that&rsquo;s pretty much it. With such a simple pipeline you get Docker images which are:</p>
<ul>
<li>up to date</li>
<li>functional</li>
<li>correctly tagged</li>
</ul>
<p>For implementation details, such as
<a href="https://github.com/TomasTomecek/rust-container/blob/master/Dockerfile">Dockerfile</a>,
<a href="https://github.com/TomasTomecek/rust-container/blob/master/Makefile">Makefile</a>,
<a href="https://github.com/TomasTomecek/rust-container/blob/master/.travis.yml">.travis.yml</a>,
<a href="https://github.com/TomasTomecek/rust-container/blob/master/tests/test_functional.py">tests</a>
or the precise
<a href="https://github.com/TomasTomecek/rust-container/blob/master/hack/ci.sh">cron job script</a>,
please check my <a href="https://github.com/TomasTomecek/rust-container">TomasTomecek/rust-container</a> GitHub repository.</p>

Removing messages with notmuch
https://blog.tomecek.net/post/removing-messages-with-notmuch/
Tue, 07 Feb 2017 13:39:17 +0100

<p><strong>Disclaimer: this is very likely not safe, use it at your own risk; I don&rsquo;t take responsibility for any harm.</strong></p>
<p>So you <a href="https://notmuchmail.org/special-tags/">can&rsquo;t remove messages</a> with notmuch:</p>
<pre><code class="language-text">While notmuch does not support, nor ever will, the deleting of messages...
</code></pre>
<p>That&rsquo;s a fact. But what if it could help you with that? A lot actually.</p>
<p>I reached my quota. And there was no way to remove messages in bulk with the standard web interface (I wanted to get rid of ~30k messages). The filter I wanted to use was quite simple:</p>
<pre><code class="language-text">Everything older than one year from these folders.
</code></pre>
<p>This is where notmuch excels!</p>
<p>So I tagged the mail for deletion:</p>
<pre><code class="language-shell">$ notmuch tag +deleted date:..1.5year folder:chatty-mailing-list
</code></pre>
<p>So now we know what we want to delete. Let&rsquo;s proceed to the <code>mbsync</code> part. And this
is where it gets messy and dangerous. I definitely advise turning off any mail
synchronization, so there are no race conditions.</p>
<p>First set <code>Expunge</code> to <code>Both</code> since you want to remove your remote mail.</p>
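<p>In my <code>~/.mbsyncrc</code> that&rsquo;s a single line in the channel section (the channel and store names here are illustrative):</p>
<pre><code class="language-text">Channel chatty-mailing-list
Master :remote:chatty-mailing-list
Slave :local:chatty-mailing-list
Expunge Both
</code></pre>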
<p>Time for some <a href="https://cr.yp.to/proto/maildir.html">maildir flags magic</a>: if
you flag your mail with <code>T</code>, it means it&rsquo;s supposed to be trashed. Our mail
server actually removes it right away. So let&rsquo;s trash:</p>
<pre><code class="language-shell">$ for x in $(notmuch search --output=files tag:deleted) ; do mv $x ${x}T ; done
</code></pre>
<p>It seems simple, but it&rsquo;s definitely not safe because the message
filename is expected to end with <code>2,</code>, so please make sure the filenames look like this:</p>
<pre><code class="language-shell">/home/me/.mail/chatty-mailing-list/cur/1437473681.4267_139480.host,U=13261:2,
</code></pre>
<p>so you can trash them by turning them into</p>
<pre><code class="language-shell">/home/me/.mail/chatty-mailing-list/cur/1437473681.4267_139480.host,U=13261:2,T
</code></pre>
<p>Run <code>mbsync</code>, it will propagate the deletion and&hellip; That&rsquo;s it.</p>
<p>This has worked for me; I hope it will work for you. Just make sure you know what you&rsquo;re
doing before running any potentially destructive commands.</p>
<p>Is there a better way? I think so. Please suggest one in the comments.</p>

Non-blocking stdin with python using epoll
https://blog.tomecek.net/post/non-blocking-stdin-in-python/
Thu, 20 Oct 2016 16:28:44 +0200

<p>I was playing with <code>epoll</code> and was curious whether I could use it to monitor
<code>sys.stdin</code>. The biggest issue was that <code>sys.stdin.read()</code> is blocking and I
had no way to figure out whether I had read the descriptor fully or not (pretty much making the
<code>epoll</code> useless). That is, until I changed it to non-blocking with <code>fcntl</code>.</p>
<pre><code class="language-python">import os
import sys
import fcntl
import select
fd = sys.stdin.fileno()
fl = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
epoll = select.epoll()
epoll.register(fd, select.EPOLLIN)
try:
while True:
events = epoll.poll(1)
for fileno, event in events:
data = &quot;&quot;
while True:
l = sys.stdin.read(64)
if not l:
break
data += l
print(data.upper(), end=&quot;&quot;)
finally:
epoll.unregister(fd)
epoll.close()
</code></pre>
<p>Sample usage:</p>
<pre><code class="language-shell">$ python3 cat.py
asdqwe
ASDQWE
zxcasd
ZXCASD
^CTraceback (most recent call last):
File &quot;cat.py&quot;, line 15, in &lt;module&gt;
events = epoll.poll(1)
KeyboardInterrupt
</code></pre>

LinuxCon ContainerCon Europe 2016
https://blog.tomecek.net/post/linuxcon-containercon-europe-2016/
Mon, 10 Oct 2016 16:28:44 +0200

<p>Here are my assorted notes from some <code>${subject}</code> talks.</p>
<p>(Blame me, not speakers, for any untruths in these notes.)</p>
<h2 id="keynote-incremental-revolution-what-docker-learned-from-the-open-source-fire-hose-solomon-hykes-founder-cto-and-chief-product-officer-docker">Keynote: Incremental Revolution - What Docker Learned from the Open-Source Fire Hose - Solomon Hykes, Founder, CTO and Chief Product Officer, Docker</h2>
<ul>
<li>Incremental revolution</li>
<li>Tools of mass innovation</li>
<li>[Similar to a DockerCon 2015 keynote]</li>
<li>Programmable internet would be a tool of mass innovation</li>
<li>Docker is building a stack: standards → infra → dev platform → product</li>
<li>Docker is 250 people</li>
<li>With open source they get a lot of contributions</li>
<li>Borrowed some open source rules from the Linux kernel</li>
<li>Linux started with plumbing, Docker started with the product
<ul>
<li>Plumbing along the way</li>
</ul></li>
<li>Docker is solving problems
<ul>
<li>e.g. Docker for Mac, Docker for AWS</li>
</ul></li>
<li>Demo
<ul>
<li>[showcases docker on Mac]</li>
<li>[showcases docker on AWS - clicky interface]</li>
</ul></li>
<li>Infrakit - new open source project
<ul>
<li>[was opensourced live]</li>
<li>[for more info see notes below]</li>
</ul></li>
</ul>
<h2 id="putting-the-parts-together-building-a-secure-container-platform-matthew-garrett-coreos">Putting the Parts Together: Building a Secure Container Platform - Matthew Garrett, CoreOS</h2>
<ul>
<li>Multilayer security: container → runtime → kernel → firmware
<ul>
<li>You need to secure the lower layer b/c the upper layer trusts it implicitly</li>
</ul></li>
<li>[Uses Fedora!!! A lot of CoreOS folks do]</li>
<li>UEFI Secure boot
<ul>
<li>First level of protection: signed bootloader</li>
</ul></li>
<li>Signed kernel, baked initrd into kernel</li>
<li><a href="https://sourceforge.net/p/linux-ima/wiki/Home/">IMA</a>
<ul>
<li>Kernel has a list of files and their hashes and verifies that the file (executable) matches its hash</li>
<li>Makes sure that no one tampered your files</li>
<li>Hash is stored in extended attribute</li>
</ul></li>
<li><a href="https://wiki.gentoo.org/wiki/Extended_Verification_Module">EVM</a>
<ul>
<li>Verify that selected set of extended security attributes wasn&rsquo;t changed</li>
</ul></li>
<li><a href="https://source.android.com/security/verifiedboot/dm-verity.html">dm-verity</a> (by google for chrome os)
<ul>
<li>Hash tree which validates whole filesystem</li>
<li>The tree contains hashes of blocks, not the whole block device</li>
<li>Read only filesystem</li>
<li>Enabled in CoreOS, root hash is stored in signed kernel</li>
</ul></li>
<li>The above implies immutable base system</li>
<li>Where to store trusted keys required for signed container images?
<ul>
<li>Immutable kernel keyring</li>
<li>Populated during boot time and made immutable</li>
<li>In UEFI variables (are signed)</li>
</ul></li>
<li>Per container SELinux isolation</li>
<li>Clear containers: run production containers in lightweight VMs</li>
<li>Live introspection (theoretical research)
<ul>
<li>The future</li>
<li>Reduces performance significantly</li>
</ul></li>
</ul>
<h2 id="building-distributed-systems-without-docker-using-docker-plumbing-projects-patrick-chanezon-david-chung-docker-phil-estes-ibm">Building Distributed Systems without Docker, Using Docker Plumbing Projects - Patrick Chanezon &amp; David Chung, Docker &amp; Phil Estes, IBM</h2>
<ul>
<li>OCI / runc is getting adoption
<ul>
<li><a href="https://github.com/hyperhq/runv">runv</a> is hypervisor based runtime for <a href="https://www.hyper.sh/">Hyper.sh</a>, fully compatible with OCI</li>
<li><a href="https://www.cloudfoundry.org/garden-and-runc/">Garden</a>, runtime for Cloud Foundry, <a href="https://github.com/hyperhq/runv">will use runc</a> as a backend for linux</li>
<li>Intel has their <a href="https://github.com/01org/cc-oci-runtime">own implementation</a> of OCI runtime spec for clear containers</li>
</ul></li>
<li>containerd uses shim process to enable process reparenting
<ul>
<li>And thus enable live reload of containerd without restarting containers</li>
</ul></li>
<li><a href="https://github.com/docker/infrakit">InfraKit</a>
<ul>
<li>Newly introduced as an open source project to create and manage a declarative, self-healing infrastructure</li>
</ul></li>
<li>Declarative json config: images, groups, flavors, instance types, sizes&hellip;</li>
<li>Config is the input</li>
<li>Self healing
<ul>
<li>Monitoring infra state</li>
<li>Detect state divergence</li>
<li>Take actions</li>
</ul></li>
<li>No downtime, rolling update</li>
<li>Primitives, abstractions: create, scale, group, instance, &hellip;</li>
<li>Instance plugins for EC2, Azure, Vagrant, &hellip;</li>
</ul>
<h2 id="locking-down-your-systemd-services-lennart-poettering-red-hat">Locking Down Your Systemd Services - Lennart Poettering, Red Hat</h2>
<p>This talk was about sandboxing and security features of systemd (existing,
planned). Lennart presented a list of configuration options available for
services (unit files):</p>
<ul>
<li><code>DynamicUser</code> - transient user: created when the service starts, removed once the service is stopped</li>
<li><code>CapabilityBoundingSet</code> - maximal set of ever available capabilities to the process tree - process will never ever be able to obtain capabilities which are not in this set</li>
<li><code>PrivateTmp</code> - <code>/tmp</code> is shared for all processes; this option provides private <code>/tmp</code> dir to the service as <code>/tmp</code>, it&rsquo;s removed when the service is stopped (on host this dir is available as a subdir of <code>/tmp</code>)</li>
<li><code>PrivateDevices</code> - no access to privileged devices (e.g. <code>/dev/sda</code>)</li>
<li><code>PrivateNetwork</code> - creates new network stack for the service and populates only <code>localhost</code> interface, rest of the network is unavailable</li>
<li><code>ProtectSystem</code> - set parts of the filesystem read only: <code>/boot</code>, <code>/usr</code>, <code>/etc</code> or even <code>/</code></li>
<li><code>PrivateUsers</code> - disconnected user databases, most of the users are mapped to <code>nobody</code> user; only root user and service&rsquo;s user are mapped correctly</li>
<li><code>ReadWritePaths</code> - list of paths which the service is expected to read and write into</li>
<li><code>ReadOnlyPaths</code> - list of paths which the service is expected to read only</li>
<li><code>InaccessiblePaths</code> - list of paths which the service should not be able to access</li>
</ul>
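<p>In a unit file these are plain directives under <code>[Service]</code>. A minimal sketch (the service name and binary are made up):</p>
<pre><code>[Service]
ExecStart=/usr/bin/my-daemon
DynamicUser=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=full
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
</code></pre>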
<p>You can also try the options out at runtime with <code>systemd-run</code>:</p>
<pre><code>$ systemd-run -p InaccessiblePaths=/etc/passwd -p ReadOnlyPaths=/etc -p PrivateTmp=true -t bash
Running as unit: run-u1282.service
Press ^] three times within 1s to disconnect TTY.
</code></pre>
<p>We can&rsquo;t read <code>/etc/passwd</code>:</p>
<pre><code>bash-4.3# ls -lha /etc/passwd
---------- 1 0 root 0 Oct 4 14:41 /etc/passwd
bash-4.3# cat /etc/passwd
bash-4.3#
</code></pre>
<p>We can&rsquo;t write to <code>/etc/</code>:</p>
<pre><code>bash-4.3# echo &quot;asdqwe&quot; &gt;/etc/asdqwe
bash: /etc/asdqwe: Read-only file system
</code></pre>
<p>We have our own private <code>/tmp</code>:</p>
<pre><code>bash-4.3# ls -lha /tmp
total 4.0K
drwxrwxrwt 2 0 root 40 Oct 12 14:52 .
dr-xr-xr-x. 21 0 root 4.0K Jul 29 16:42 ..
</code></pre>Flock 2016: my noteshttps://blog.tomecek.net/post/flock-2016/
Fri, 12 Aug 2016 11:00:00 +0200https://blog.tomecek.net/post/flock-2016/<p>Last week we were at Flock 2016 which was held in Krakow, Poland. Awesome
event, lots of news, great people, plenty of conversations and fun.</p>
<p>Here is a list of my notes:</p>
<h2 id="keynote">Keynote</h2>
<ul>
<li>Fedora is growing</li>
<li><a href="http://flatpak.org/">Flatpak</a> is coming!</li>
<li>Fedora wants to focus on Atomic spin</li>
<li>Fedora is made by community (<sup>1</sup>&frasl;<sub>3</sub> is Red Hat employees)</li>
</ul>
<h2 id="layered-image-build-system-in-fedora">Layered Image Build System in Fedora</h2>
<ul>
<li>Eventually will be feeding hub.docker.com/fedora</li>
<li>They will be getting <a href="https://github.com/fedora-cloud/Fedora-Dockerfiles">Fedora-Dockerfiles</a> into dist-git</li>
<li>registry.fedoraproject.org is supposed to be behind mirror manager</li>
<li>Guidelines:
<ul>
<li><a href="https://fedoraproject.org/wiki/PackagingDrafts/Package_Review_Process_with_Containers">https://fedoraproject.org/wiki/PackagingDrafts/Package_Review_Process_with_Containers</a></li>
<li><a href="https://fedoraproject.org/wiki/PackagingDrafts/Containers">https://fedoraproject.org/wiki/PackagingDrafts/Containers</a></li>
</ul></li>
</ul>
<h2 id="getting-new-things-into-fedora">Getting new things into Fedora</h2>
<ul>
<li>Open</li>
<li>Reproducible</li>
<li>Auditable</li>
<li>tl;dr talk to people what you want to do, ideally as soon as possible</li>
</ul>
<h2 id="packaging-chromium">Packaging chromium</h2>
<ul>
<li>Huge project, frequent releases, tons of code and contributors</li>
<li>Uses <a href="https://chromium.googlesource.com/chromium/src/+/master/tools/gn/docs/quick_start.md">GN</a> as a build tool-chain</li>
<li>Downstream patches: rebasing is hard</li>
<li>Maintainers should upstream patches ASAP</li>
<li>Has team of maintainers in gentoo</li>
<li>Lots of bundled libs b/c they move too fast</li>
<li>Bundled dependencies are updated when the update is required, not when a new release is done</li>
</ul>
<h2 id="containers-in-production">Containers in production</h2>
<ul>
<li>It&rsquo;s all about building images</li>
<li>CoW issues: shared memory, overlay</li>
<li>Shared storage impossible to be done with CoW</li>
<li>CVEs</li>
<li>The team is planning to unpack docker images into an ostree repository</li>
<li>Simple image signing: one image can have multiple signatures</li>
<li>Remote image inspection: <a href="https://github.com/projectatomic/skopeo">skopeo</a> -&gt; <a href="https://github.com/containers/image">containers/image</a></li>
<li>CoW storage management tool: <a href="https://github.com/containers/storage">containers/storage</a></li>
<li>ocid - container management API
<ul>
<li>Implements k8s container runtime interface</li>
</ul></li>
</ul>
<h2 id="modularity">Modularity</h2>
<ul>
<li>Was super excited about this: Langdon delivered an awesome presentation</li>
<li>Workshop was also very active: everyone asked questions and tried to be part of the discussion</li>
<li>tl;dr modules are yum repos and they are built in a funky way in koji using custom tooling</li>
<li>It takes time to build even a simple module</li>
<li>AND IT WORKS!</li>
<li><a href="https://fedoraproject.org/wiki/Modularity">https://fedoraproject.org/wiki/Modularity</a></li>
</ul>

Handling secrets when building docker images is easy
https://blog.tomecek.net/post/docker-build-with-secrets/
Thu, 04 Aug 2016 10:36:39 +0200

<p>So you wanna build a docker image. And you need to fetch your application sources from git, which is guarded by <code>ssh</code>. And you don&rsquo;t want the ssh key to leak into the final image. Bummer.</p>
<p>Unless&hellip;</p>
<p>This is the Dockerfile. As you can see, we clone over <code>ssh</code>:</p>
<pre><code>FROM fedora
COPY id_rsa /root/.ssh/
RUN dnf install -y git python3-setuptools python3-urwid
RUN git clone ssh://git@github.com/TomasTomecek/sen &amp;&amp; \
    cd sen &amp;&amp; \
    python3 ./setup.py install &amp;&amp; \
    rm -rf /root/.ssh/id_rsa  # remove the key, we don't want to share it with the world
CMD [&quot;sen&quot;]
</code></pre>
<p>The important line is:</p>
<pre><code>rm -rf /root/.ssh/id_rsa
</code></pre>
<p>as we don&rsquo;t want to share the key with the world (and we think this will work).</p>
<p>We can build now:</p>
<pre><code>$ docker build --tag=sen .
...
Successfully built 2256d1ba4421
</code></pre>
<p>Let&rsquo;s see if we can access the private key:</p>
<pre><code>$ mkdir image/
$ docker save sen | tar -x -C image/
$ cd image/
$ find . -name &quot;*.tar&quot; -exec tar -t -f {} \; | grep id_rsa
root/.ssh/id_rsa
</code></pre>
<p>Whoops! Our <strong>private</strong> key leaked! We need to fix this&hellip;</p>
<p>&hellip;by squashing layers!!</p>
<pre><code>$ docker-squash -f f9873d530588 -t squashed-sen sen
</code></pre>
<p>(use <code>docker history</code> to find out the layer id you want to squash from)</p>
<p>Let&rsquo;s see if the key is present in the squashed image:</p>
<pre><code>$ rm -rf ./image/*
$ docker save squashed-sen | tar -x -C image/
$ find . -name &quot;*.tar&quot; -exec tar -t -f {} \; | \
    grep id_rsa || \
    echo &quot;You're safe&quot;
You're safe
</code></pre>
<p>This is how you can easily handle secrets when building docker images.</p>
<p>Here&rsquo;s <a href="https://github.com/goldmann/docker-squash">docker-squash</a>. Thanks, Marek, for writing the tool!</p>

Download manifests from Docker Hub
https://blog.tomecek.net/post/download-manifests-from-docker-hub/
Mon, 13 Jun 2016 12:20:46 +0200

<p>So we needed to fetch manifests of repositories from Docker Hub today. It&rsquo;s not
that hard: 30 lines of <code>python</code> can do it. But at the same time, you need to read
the docs with all the specs.</p>
<h3 id="authentication">Authentication</h3>
<p>The biggest pain. <code>pull</code> seems to be a privileged operation which requires
authentication. Luckily you only need to obtain a token:</p>
<pre><code>import requests

repo = &quot;library/fedora&quot;
login_template = &quot;https://auth.docker.io/token?service=registry.docker.io&amp;scope=repository:{repository}:pull&quot;
token = requests.get(login_template.format(repository=repo)).json()[&quot;token&quot;]
</code></pre>
<p>This is documented nicely <a href="https://docs.docker.com/registry/spec/auth/token/">here</a>.</p>
<h3 id="api-call-for-getting-the-manifest">API call for getting the manifest</h3>
<p>That one is documented over <a href="https://docs.docker.com/registry/spec/api/#manifest">here</a>.</p>
<pre><code>GET /v2/{repository}/manifests/{tag}
</code></pre>
<p>Nothing really to talk about: just fetch manifest of requested repository.</p>
<pre><code>tag = &quot;latest&quot;  # or any other tag you are interested in
get_manifest_template = &quot;https://registry.hub.docker.com/v2/{repository}/manifests/{tag}&quot;
manifest = requests.get(
    get_manifest_template.format(repository=repo, tag=tag),
    headers={&quot;Authorization&quot;: &quot;Bearer {}&quot;.format(token)},
).json()
</code></pre>
<p>Pretty simple, right?</p>
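<p>If you just want to poke at the API from a terminal, the same flow works with
<code>curl</code> and <code>jq</code> (a sketch assuming <code>jq</code> is installed;
the <code>Accept</code> header asks for the newer v2 manifest schema instead of the
default v1 one):</p>
<pre><code>$ repo=&quot;library/fedora&quot; tag=&quot;latest&quot;
$ token=$(curl -s &quot;https://auth.docker.io/token?service=registry.docker.io&amp;scope=repository:${repo}:pull&quot; | jq -r .token)
$ curl -s -H &quot;Authorization: Bearer ${token}&quot; \
       -H &quot;Accept: application/vnd.docker.distribution.manifest.v2+json&quot; \
       &quot;https://registry.hub.docker.com/v2/${repo}/manifests/${tag}&quot;
</code></pre>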
<p>The whole script is available in <a href="https://github.com/TomasTomecek/download-manifest-from-dockerhub">this github repo</a>.</p>
<p>Happy hacking!</p>Simple way to check for race conditionshttps://blog.tomecek.net/post/test-race-conditions/
Tue, 17 May 2016 12:20:46 +0200https://blog.tomecek.net/post/test-race-conditions/<p>Today was a fun day. We were working on a piece of code which interacts with a PostgreSQL database. One function was reading from the database and, based on the query result, inserting some data afterwards. The thing is that it wasn&rsquo;t done in a transaction, so I suspected there could be a race condition. But how do you test such a case?</p>
<p>My requirements for such a test were obvious: I wanna spam the server with streams of requests and check the logs to see whether it can handle them. Pretty easy to do in shell:</p>
<pre><code>for y in `seq 1 8`; do
  (for x in `seq 1 50`; do
    curl -s url/api/v1/endpoint/
  done) &amp;
done
</code></pre>
<p>This simple shell script spawns 8 processes in parallel where each process performs 50 requests sequentially.</p>
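<p>The same load can be generated with GNU <code>xargs</code>, which takes care of the
parallelism for you (a sketch assuming GNU <code>xargs</code> with the <code>-P</code>
flag):</p>
<pre><code>$ yes url/api/v1/endpoint/ | head -n 400 | xargs -n 1 -P 8 curl -s -o /dev/null
</code></pre>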
<p>I hit the race condition immediately.</p>
<p>\o/</p>
Automatic mounts with systemdhttps://blog.tomecek.net/post/automount-with-systemd/
Mon, 18 Apr 2016 15:11:04 +0200https://blog.tomecek.net/post/automount-with-systemd/<p>So I wanted to set up automatic mounting (read: autofs) with systemd, <em>without</em> using <code>fstab</code>.</p>
<p>Unfortunately, the man page didn&rsquo;t have any examples, so it wasn&rsquo;t that easy to figure out. Luckily, there is an excellent guide in an RHCSA course [1].</p>
<p>Tl;dr</p>
<h2 id="you-need-two-files">You need two files</h2>
<ol>
<li>The first one sets up the mount itself.</li>
<li>The second one performs the automatic mounting.</li>
</ol>
<p>It&rsquo;s not that easy to name the file for automatic mounting. A quote from the man page:</p>
<pre><code>Automount units must be named after the automount directories they control.
Example: the automount point /home/lennart must be configured in a unit file
home-lennart.automount. For details about the escaping logic used to convert
a file system path to a unit name see systemd.unit(5).
</code></pre>
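<p>Instead of doing the escaping in your head, you can ask systemd to compute the
unit name for you (a quick sketch; the <code>--suffix</code> option of
<code>systemd-escape</code> appends the unit type):</p>
<pre><code>$ systemd-escape --path --suffix=automount /mnt/scratch
mnt-scratch.automount
</code></pre>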
<h2 id="content-of-the-files">Content of the files</h2>
<pre><code>$ cat /etc/systemd/system/mnt-scratch.automount
[Unit]
Description=Automount Scratch

[Automount]
Where=/mnt/scratch

[Install]
WantedBy=multi-user.target

$ cat /etc/systemd/system/mnt-scratch.mount
[Unit]
Description=Scratch

[Mount]
What=nfs.example.com:/export/scratch
Where=/mnt/scratch
Type=nfs

[Install]
WantedBy=multi-user.target
</code></pre>
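<p>One more knob worth knowing about: if you want the share to be unmounted again
once it sits idle, the <code>[Automount]</code> section accepts
<code>TimeoutIdleSec</code> (a sketch &ndash; pick whatever timeout suits you):</p>
<pre><code>[Automount]
Where=/mnt/scratch
TimeoutIdleSec=300
</code></pre>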
<p>Now notify systemd that there are new unit files available:</p>
<pre><code>$ systemctl daemon-reload
</code></pre>
<h2 id="runtime-setup">Runtime setup</h2>
<p>Feel free to disable the <code>mount</code>, but <code>enable</code> the <code>automount</code>:</p>
<pre><code>$ systemctl is-enabled mnt-scratch.mount
disabled
$ systemctl is-enabled mnt-scratch.automount
enabled
$ systemctl start mnt-scratch.automount
$ ls /mnt/scratch &gt;/dev/null
$ systemctl status mnt-scratch.automount
● mnt-scratch.automount - Automount Scratch
Loaded: loaded (/etc/systemd/system/mnt-scratch.automount; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-04-18 10:49:04 CEST; 4h 33min ago
Where: /mnt/scratch
Apr 18 10:49:04 oat systemd[1]: Set up automount Automount Scratch.
Apr 18 10:49:14 oat systemd[1]: mnt-scratch.automount: Got automount request for /mnt/scratch, triggered by 20266 (zsh)
$ systemctl status mnt-scratch.mount
● mnt-scratch.mount - Scratch
Loaded: loaded (/proc/self/mountinfo; disabled; vendor preset: disabled)
Active: active (mounted) since Mon 2016-04-18 10:49:16 CEST; 4h 33min ago
Where: /mnt/scratch
What: nfs.example.com:/export/scratch
Apr 18 10:49:14 oat systemd[1]: Mounting Scratch...
Apr 18 10:49:16 oat systemd[1]: Mounted Scratch.
</code></pre>
<p>[1] <a href="http://codingbee.net/tutorials/rhcsa/rhcsa-automounting-using-systemd/">http://codingbee.net/tutorials/rhcsa/rhcsa-automounting-using-systemd/</a></p>Building docker images with two Dockerfileshttps://blog.tomecek.net/post/build-docker-image-in-two-steps/
Tue, 09 Feb 2016 13:12:21 +0100https://blog.tomecek.net/post/build-docker-image-in-two-steps/<p>So I got asked about this topic after my DevConf 2016
<a href="https://devconfcz2016.sched.org/event/5lzf/is-it-hard-to-build-a-docker-image">talk</a>:
there is <a href="https://github.com/docker/docker/issues/13490#issuecomment-156554857">a
solution</a>
available on the internets which describes how one can use two Dockerfiles to build
an image. The whole article can be found <a href="http://resources.codeship.com/ebooks/continuous-integration-continuous-delivery-with-docker">
here</a>.</p>
<p>What I didn&rsquo;t like about that solution is that the first image outputs the whole
build artifact as a tarball to standard output. To me, that&rsquo;s a bit hacky. Since
docker 1.8 you can <code>cp</code> files and directories between containers and the host.
Let&rsquo;s try to do that!</p>
<p>All of this is because of build secrets. It may happen that you need to
authenticate with an external service when building a docker image. In order to
do that, you need to have a secret available during the build. That&rsquo;s a problem:
the secret may leak into the final image (whether via <code>docker history</code> or
directly in some layer).</p>
<p>Here&rsquo;s a solution!</p>
<p>Split your build process into two steps, each with its own Dockerfile.</p>
<ol>
<li><p>Authenticate with external service in order to fetch sources (use private
SSH key to authenticate with GitHub so you can clone a repo) and build the
project.</p></li>
<li><p>Get build artifacts from step 1 and install them.</p></li>
</ol>
<h2 id="let-s-do-this">Let&rsquo;s do this!</h2>
<p>First we need to write a Dockerfile which is able to fetch and build the project:</p>
<pre><code class="language-dockerfile">FROM fedora:23
RUN dnf install -y git
# this is the private key you DON'T want to get leaked
COPY id_rsa /
# just for the demo; we are not using the key actually
RUN git clone https://github.com/TomasTomecek/sen /project &amp;&amp; \
cd /project &amp;&amp; \
python3 ./setup.py build
# make clean would make sense here
</code></pre>
<p>Let&rsquo;s get the key:</p>
<pre><code class="language-shell">cp -a ~/.ssh/id_rsa id_rsa
</code></pre>
<p>and don&rsquo;t forget to blacklist the key in <code>.gitignore</code>!</p>
<pre><code class="language-shell">printf &quot;id_rsa\n&quot; &gt;.gitignore
</code></pre>
<p>Build time!</p>
<pre><code>docker build --tag=build-image .
</code></pre>
<p>We can copy the build artifact from the build container now (<code>docker create</code>
only creates the container and doesn&rsquo;t start it, so the <code>cat</code> command is just
a placeholder and never actually runs):</p>
<pre><code class="language-shell">docker create --name=build-container build-image cat
docker cp build-container:/project ./build-artifact
</code></pre>
<p>You are free to inspect and post-process the artifact:</p>
<pre><code class="language-shell">ls -lha ./build-artifact
</code></pre>
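<p>The release Dockerfile itself isn&rsquo;t shown in this post (it lives in the repo
linked below); a minimal sketch which just installs the pre-built artifact could
look like this &ndash; the package names are assumed from the build image above:</p>
<pre><code class="language-shell">cat Dockerfile.release
FROM fedora:23
RUN dnf install -y python3-setuptools python3-urwid
COPY ./build-artifact /project
RUN cd /project &amp;&amp; python3 ./setup.py install
CMD [&quot;sen&quot;]
</code></pre>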
<p>Everything is fine? If so, let&rsquo;s build the final image.</p>
<pre><code class="language-shell">docker build -f Dockerfile.release --tag=sen .
</code></pre>
<p>Is the key in the final image?</p>
<pre><code class="language-shell">cat ./test-if-key-is-present.sh
if docker run sen test -f /id_rsa
then
printf &quot;Key is in final image!\n&quot;
exit 2
else
printf &quot;Key is not in final image.\n&quot;
fi
</code></pre>
<pre><code class="language-shell">./test-if-key-is-present.sh
Key is not in final image
</code></pre>
<p>Whole solution is available in <a href="https://github.com/TomasTomecek/two-step-build">this GitHub repo</a>.</p>