Docker plugin for OpenStack Heat

Lars Kellogg-Stedman

I have been looking at both Docker and OpenStack recently. In my last
post I talked a little about the Docker driver for Nova; in
this post I'll be taking an in-depth look at the Docker plugin for
Heat, which has been available since the Icehouse release but is
surprisingly under-documented.

The release announcement on the Docker blog includes an
example Heat template, but it is unfortunately grossly inaccurate and
has led many people astray. In particular:

It purports to but does not actually install Docker, due to a basic
YAML syntax error, and

Even if you were to fix that problem, the lack of synchronization
between the two resources in the template would mean that you would
never be able to successfully launch a container.

In this post, I will present a fully functional example that will work
with the Icehouse release of Heat. We will install the Docker plugin
for Heat, then write a template that will (a) launch a Fedora 20
server and automatically install Docker, and then (b) use the Docker
plugin to launch some containers on that server.

Installing the Docker plugin

The first thing we need to do is install the Docker plugin. I am
running RDO packages for Icehouse locally, which do not include
the Docker plugin. We're going to install the plugin from the Heat
sources.
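The clone command itself is not shown in this excerpt; assuming the GitHub mirror of the Heat repository, it would be something like:

```
$ git clone https://github.com/openstack/heat.git
```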

This will result in a directory called heat in your current
working directory. Change into this directory:

$ cd heat

Patch the Docker plugin.

You have now checked out the master branch of the Heat
repository; this is the most recent code committed to the project.
At this point we could check out the stable/icehouse branch of
the repository to get the version of the plugin released at the
same time as the version of Heat that we're running, but we would
find that the Docker plugin was, at that point in time, somewhat
crippled; in particular:

It does not support mapping container ports to host ports, so
there is no easy way to expose container services for external
access, and

It does not know how to automatically pull missing images, so
you must arrange to run docker pull a priori for each image you
plan to use in your Heat template.

That would make us sad, so instead we're going to use the plugin
from the master branch, which only requires a trivial change in
order to work with the Icehouse release of Heat.

Look at the file
contrib/heat_docker/heat_docker/resources/docker_container.py.
Locate the following line:

attributes_schema = {

Add a line immediately before that so that the file looks like
this:

attributes.Schema = lambda x: x
attributes_schema = {

If you're curious, here is what we accomplished with that
additional line:

The code following that point contains multiple stanzas of the
form:

INFO: attributes.Schema(
    _('Container info.')
),

In Icehouse, the heat.engine.attributes module does not have a
Schema class, so these calls fail. Our patch above adds a module
member named Schema that simply returns its argument (that
is, it is an identity function).
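The effect of that one-line shim can be sketched in isolation (the fake module below stands in for heat.engine.attributes):

```python
import types

# Stand-in for heat.engine.attributes, which in Icehouse has no Schema class.
attributes = types.ModuleType("attributes")

# The one-line patch: Schema is now an identity function, so existing
# calls like attributes.Schema(_('Container info.')) return their
# argument unchanged instead of raising AttributeError.
attributes.Schema = lambda x: x

print(attributes.Schema("Container info."))  # → Container info.
```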

(NB: At the time this was written, Heat's master branch was
at a767880.)

Install the Docker plugin into your Heat plugin directory, which
on my system is /usr/lib/heat (you can set this explicitly using
the plugin_dirs directive in /etc/heat/heat.conf):
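The install step itself is not shown in this excerpt; a minimal version, assuming the source-tree path above, the /usr/lib/heat plugin directory, and the RDO service name, would be:

```
$ sudo cp contrib/heat_docker/heat_docker/resources/docker_container.py \
    /usr/lib/heat/
$ sudo systemctl restart openstack-heat-engine
```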

Templates: Installing Docker

We would like our template to automatically install Docker on a Nova
server. The example in the Docker blog mentioned earlier
attempts to do this by setting the user_data parameter of an
OS::Nova::Server resource like this:

user_data: #include https://get.docker.io

Unfortunately, an unquoted # introduces a comment in YAML, so
this is completely ignored. It would be written more correctly like
this (the | introduces a block of literal text):

user_data: |
  #include https://get.docker.io

Or possibly like this, although this would restrict you to a single
line and thus wouldn't be used much in practice:

user_data: "#include https://get.docker.io"
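You can check both behaviors directly with PyYAML (assumed to be installed):

```python
import yaml  # PyYAML, assumed available

# Unquoted '#': everything after it is a YAML comment, so user_data is null.
broken = yaml.safe_load("user_data: #include https://get.docker.io")

# Literal block scalar: the '#' is ordinary text inside the block.
fixed = yaml.safe_load("user_data: |\n  #include https://get.docker.io\n")

print(broken)  # → {'user_data': None}
print(fixed)   # → {'user_data': '#include https://get.docker.io\n'}
```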

And, all other things being correct, this would install Docker on a
system...but would not necessarily start it, nor would it configure
Docker to listen on a TCP socket. On my Fedora system, I ended up
creating the following user_data script:
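The original script is not reproduced in this excerpt; a minimal sketch for Fedora 20, matching the description below (the unit file contents and the TCP port are assumptions), might look like:

```
#!/bin/sh

yum -y upgrade
yum -y install docker-io

cat > /etc/systemd/system/docker-tcp.socket <<'EOF'
[Unit]
Description=Docker remote API socket

[Socket]
ListenStream=2375
Service=docker.service

[Install]
WantedBy=sockets.target
EOF

systemctl enable docker-tcp.socket
systemctl start docker-tcp.socket
```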

This takes care of making sure our packages are current, installing
Docker, and arranging for it to listen on a tcp socket. For that last
bit, we're creating a new systemd socket file
(/etc/systemd/system/docker-tcp.socket), which means that systemd
will actually open the socket for listening and start docker if
necessary when a client connects.

Templates: Synchronizing resources

In our Heat template, we are starting a Nova server that will run
Docker, and then we are instantiating one or more Docker containers
that will run on this server. This means that timing is suddenly very
important. If we used the user_data script as presented in the
previous section, we would probably find errors in our heat-engine.log
indicating that Heat could not connect to the Docker daemon.

This happens because it takes time to install packages. Absent any
dependencies, Heat creates resources in parallel, so Heat is happily
trying to spawn our Docker containers when our server is still
fetching the Docker package.

Heat does have a depends_on property that can be applied to
resources. For example, if we have:
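Something along these lines, with illustrative resource names:

```
docker_container:
  type: DockerInc::Docker::Container
  depends_on:
    - docker_server
  properties:
    # ... container properties ...
```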

Looks good, but this does not, in fact, help us. From Heat's
perspective, the dependency is satisfied as soon as the Nova server
boots, so really we're back where we started.

The Heat solution to this is the AWS::CloudFormation::WaitCondition
resource (and its boon companion, the
AWS::CloudFormation::WaitConditionHandle resource). A
WaitCondition is a resource that is not "created" until it has
received an external signal. We define a wait condition like this:
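A sketch of the pair (docker_wait_handle is the name used later in this post; the condition name and timeout value are illustrative):

```
docker_wait_handle:
  type: AWS::CloudFormation::WaitConditionHandle

docker_wait_condition:
  type: AWS::CloudFormation::WaitCondition
  depends_on:
    - docker_server
  properties:
    Handle: {get_resource: docker_wait_handle}
    Timeout: "600"
```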

With this in place, Heat will not attempt to create the Docker
container until we signal the wait condition resource. In order to do
that, we need to modify our user_data script to embed the
notification URL generated by Heat. We'll use both the get_resource
and str_replace intrinsic functions in order to generate the appropriate
script:
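A sketch of how the user_data might embed the wait-handle URL (the script body is abbreviated; the cfn-signal arguments are illustrative):

```
user_data:
  str_replace:
    template: |
      #!/bin/sh
      # ... install and start Docker as described earlier ...
      cfn-signal -e0 --data 'OK' -r 'Docker is ready' "$WAIT_HANDLE"
    params:
      "$WAIT_HANDLE": {get_resource: docker_wait_handle}
```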

The str_replace function probably deserves a closer look; the
general format is:

str_replace:
  template:
  params:

Where template is text content containing zero or more tokens to be
replaced, and params is a mapping from each token to the value that
will replace it in the template.

We use str_replace to substitute the token $WAIT_HANDLE with the
result of calling get_resource on our docker_wait_handle resource.
The result is an EC2-style presigned URL that will
deliver the necessary notification to Heat. In this example we're
using the cfn-signal tool, which is included in the Fedora cloud
images, but you could accomplish the same thing with curl:
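Something like the following, assuming the shell variable $WAIT_HANDLE holds the signed URL and that the UniqueId value is arbitrary:

```
curl -X PUT -H 'Content-Type:' \
  --data-binary '{"Status": "SUCCESS",
                  "Reason": "Docker is ready",
                  "Data": "OK",
                  "UniqueId": "docker"}' \
  "$WAIT_HANDLE"
```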

You need to have correctly configured Heat in order for this to work;
I've written a short companion article that contains a checklist
and pointers to additional documentation to help work around some
common issues.

Templates: Defining Docker containers

Now that we have arranged for Heat to wait for the server to finish
configuration before starting Docker containers, how do we create a
container? As Scott Lowe noticed in his blog post about Heat and
Docker, there is very little documentation available out there
for the Docker plugin (something I am trying to remedy with this blog
post!). Things are not quite as bleak as you might think, because
Heat resources are to a certain extent self-documenting. If you run:

$ heat resource-template DockerInc::Docker::Container

you will get a complete description of the attributes and properties
available in the named resource; the parameters section is probably
the most descriptive.

The port_specs and port_bindings parameters require a little
additional explanation.

The port_specs parameter is a list of (TCP) ports that will be
"exposed" by the container (similar to the EXPOSE directive in a
Dockerfile). This corresponds to the PortSpecs argument in the
/containers/create call of the Docker remote API.
For example:

port_specs:
  - 3306
  - 53/udp

The port_bindings parameter is a mapping that allows you to bind
host ports to ports in the container, similar to the -p argument to
docker run. This corresponds to the
/containers/(id)/start call in the Docker remote API.
In the mappings, the key (left-hand side) is the container port, and
the value (right-hand side) is the host port.

For example, to bind container port 3306 to host port 3306:

port_bindings:
  3306: 3306

To bind port 9090 in a container to port 80 on the host:

port_bindings:
  9090: 80

And in theory, this should also work for UDP ports (but in practice
there is an issue between the Docker plugin and the docker-py Python
module that makes it impossible to expose UDP ports via port_specs;
this is fixed in PR #310 on GitHub). For example:

port_bindings:
  53/udp: 5300

With all of this in mind, we can create a container resource
definition:
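A sketch along those lines, matching the description below (the container name, image, and port numbers are illustrative; the Docker API port is an assumption):

```
docker_container:
  type: DockerInc::Docker::Container
  depends_on:
    - docker_wait_condition
  properties:
    image: fedora
    docker_endpoint:
      str_replace:
        template: "tcp://$HOST:2375"
        params:
          "$HOST":
            get_attr:
              - docker_server_floating
              - floating_ip_address
    port_specs:
      - 80
    port_bindings:
      80: 80
```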

This uses the get_attr function to get the floating_ip_address
attribute from the docker_server_floating resource, which you can
find in the complete template. We take the return value from that
function and use str_replace to substitute that into the
docker_endpoint URL.

The pudding

Using the complete template with an appropriate local environment
file, I can launch this stack by running:
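The command itself is not shown in this excerpt; with illustrative file names, it would be something like:

```
$ heat stack-create -f docker.yaml -e local.env docker
```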