Fedora Atomic, OpenStack, and Kubernetes (oh my)

While experimenting with Fedora Atomic, I was looking for an
elegant way to automatically deploy Atomic into an OpenStack
environment and then automatically schedule some Docker containers
on the Atomic host. This post describes my solution.

Like many other cloud-targeted distributions, Fedora Atomic runs
cloud-init when the system boots. We can take advantage of this
to configure the system at first boot by providing a user-data blob
to Nova when we boot the instance. A user-data blob can be as
simple as a shell script, and while we could arguably mash everything
into a single script, it wouldn't be particularly maintainable or
flexible in the face of different pod/service/etc. descriptions.
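For the simplest case, the blob really can be just a script: cloud-init sees the leading "#!" and runs the file on first boot. A trivial (made-up) example:

```shell
#!/bin/sh
# Minimal user-data: cloud-init detects the "#!" line and runs this
# script once, on first boot. Here we just drop a marker file.
echo "hello from cloud-init" > /tmp/firstboot.txt
```

Anything more involved than this quickly outgrows a single script, which is what motivates the multipart approach below.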

In order to build a more flexible solution, we’re going to take
advantage of the following features:

Cloud-init recognizes a number of specific MIME types (such as
text/cloud-config or text/x-shellscript). We can provide a
custom part handler that will be used to handle MIME types not
intrinsically supported by cloud-init.

A custom part handler for Kubernetes configurations

When the part handler is first initialized, it will ensure that
Kubernetes is started. If it is provided with a document matching one
of the above MIME types, it will pass that document to the appropriate
kubecfg command to create the objects in Kubernetes.
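A handler along those lines might look like the following sketch. The `list_types()`/`handle_part()` interface (with `handler_version = 2`) is cloud-init's real part-handler API, but the MIME type names, the systemd unit names, and the exact kubecfg invocation are my assumptions for illustration; the actual handler isn't reproduced in this excerpt.

```python
import os
import subprocess

# Hypothetical MIME types, mapped to the kubecfg object type they create.
KUBE_TYPES = {
    "text/x-kubernetes-pod": "pods",
    "text/x-kubernetes-service": "services",
}

# Version 2 of the part-handler interface: cloud-init passes a
# "frequency" argument in addition to the payload.
handler_version = 2


def list_types():
    # Tell cloud-init which MIME types this handler accepts.
    return list(KUBE_TYPES)


def handle_part(data, ctype, filename, payload, frequency):
    if ctype == "__begin__":
        # Called once before any parts are handled: make sure the
        # Kubernetes services are running (unit names are assumptions).
        subprocess.check_call(
            ["systemctl", "start", "etcd", "kube-apiserver", "kubelet"])
        return
    if ctype == "__end__":
        # Called once after all parts have been handled.
        return
    # Write the document to disk and hand it to kubecfg.
    path = os.path.join("/var/tmp", os.path.basename(filename))
    with open(path, "w") as fd:
        fd.write(payload)
    subprocess.check_call(
        ["kubecfg", "-c", path, "create", KUBE_TYPES[ctype]])
```

Installed via cloud-init's part-handler mechanism, this runs automatically when a matching part appears in the user-data archive.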

Creating multipart MIME archives

I have also created a modified version of the standard
write-mime-multipart.py Python script. This script inspects the
first line of each file to determine its content type; in addition to
the standard cloud-init types (like #cloud-config for a
text/cloud-config type file), this script recognizes:
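The assembly step itself is straightforward with Python's standard email package; a minimal sketch (the function name and the mapping from content to type are mine, not the script's):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText


def build_userdata(parts):
    """Combine (content, mimetype) pairs into one multipart user-data blob."""
    archive = MIMEMultipart()
    for content, mimetype in parts:
        # MIMEText wants just the subtype, e.g. "x-shellscript".
        subtype = mimetype.split("/", 1)[1]
        archive.attach(MIMEText(content, subtype))
    return archive.as_string()
```

The resulting string is what gets handed to Nova as the user-data file.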

You would obviously need to substitute values for --image and
--key-name that are appropriate for your environment.
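The boot command itself would look something like this (the exact command isn't preserved in this excerpt; the flags are standard nova CLI flags, and the image, flavor, key, and instance names are placeholders):

```shell
# Boot an Atomic instance, passing the multipart archive as user-data.
# Image, flavor, and key names here are placeholders.
nova boot \
    --image fedora-atomic \
    --flavor m1.small \
    --key-name mykey \
    --user-data userdata.txt \
    atomic-host-0
```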

Details, details

If you are experimenting with Fedora Atomic 21, you may find that
the above example doesn't work – the official mysql image generates
an SELinux error. We can switch SELinux to permissive mode by putting
the following into a file called disable-selinux.sh:
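The original file contents aren't shown in this excerpt, but a script along these lines does the job using the standard SELinux toggles:

```shell
#!/bin/sh
# Switch SELinux to permissive mode for the running system...
setenforce 0

# ...and persist the change across reboots.
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
```

Included in the multipart archive as a shell-script part, this runs on first boot before the containers are scheduled.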

Problems, problems

This works, and I think it is a relatively elegant solution. However,
there are some drawbacks. In particular, the custom part handler
runs fairly early in the cloud-init process, which means that it
cannot depend on changes implemented by user-data scripts (because
those run much later).

A better solution might be to have the custom part handler simply
write the Kubernetes configs into a directory somewhere, and then
install a service that starts after Kubernetes, (a) watches that
directory for new files, and (b) passes each configuration to
Kubernetes and deletes (or relocates) the file.
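That watcher could be as simple as the following sketch. The drop directory, the kubecfg invocation, and the filename-to-object-type convention are all assumptions of mine; the idea is that a systemd unit ordered after the Kubernetes services would run this periodically.

```python
import os
import subprocess

# Hypothetical drop directory the part handler would write into.
WATCH_DIR = "/var/lib/kubernetes-configs"


def object_type(filename):
    # Hypothetical convention: the penultimate suffix names the kubecfg
    # object type, e.g. "mysql.pods.json" -> "pods".
    return filename.rsplit(".", 2)[-2]


def process_pending(watch_dir=WATCH_DIR, runner=subprocess.check_call):
    """Hand each queued config to kubecfg, then delete it."""
    handled = []
    for name in sorted(os.listdir(watch_dir)):
        path = os.path.join(watch_dir, name)
        runner(["kubecfg", "-c", path, "create", object_type(name)])
        os.unlink(path)
        handled.append(name)
    return handled
```

Because the service would only start after Kubernetes (and after any user-data scripts have run), this sidesteps the ordering problem described above.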