Running nova-libvirt and nova-docker on the same host

I regularly use OpenStack on my laptop with libvirt as my
hypervisor. I was interested in experimenting with recent versions of
the nova-docker driver, but I didn’t have a spare system available
on which to run the driver, and I use my regular nova-compute service
often enough that I didn’t want to simply disable it temporarily in
favor of nova-docker.

NB As pointed out by gustavo in the comments, running two
neutron-openvswitch-agents on the same host – as suggested in this
article – is going to lead to nothing but sadness and doom. So
kids, don’t try this at home. I’m leaving the article here because I
think it still has some interesting bits.

I guess the simplest solution would be to spin up a vm on which to run
nova-docker, but why use a simple solution when there are things to
be learned? I wanted to know if it were possible (and if so, how) to
run both hypervisors on the same physical host.

The naive solution would be to start up another instance of
nova-compute configured to use the Docker driver. Unfortunately,
Nova only permits a single service instance per “host”, so starting up
the second instance of nova-compute would effectively “mask” the
original one.

Fortunately, Nova’s definition of what constitutes a “host” is
somewhat flexible. Nova supports a host configuration key in
nova.conf that will cause Nova to identify the host on which it is
running using your explicitly configured value, rather than your
system hostname. We can take advantage of this to get a second
nova-compute instance running on the same system.

Install nova-docker

We’ll start by installing the nova-docker driver from
https://github.com/stackforge/nova-docker. If you’re running the
Juno release of OpenStack (which I am), you’re going to want to use
the stable/juno branch of the nova-docker repository. So:
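A minimal sketch of that checkout and install (assuming a system-wide pip install; adjust for however Nova is installed in your environment):

```shell
# Clone the driver and switch to the branch matching our OpenStack release.
git clone https://github.com/stackforge/nova-docker.git
cd nova-docker
git checkout stable/juno

# Install the driver into the same Python environment as Nova.
sudo pip install .
```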

Configure nova-docker

Now, rather than configuring /etc/nova/nova.conf, we’re going to
create a new configuration file, /etc/nova/nova-docker.conf, with
only the configuration keys that differ from our primary Nova
configuration:
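A sketch of what that file might contain; the compute_driver class name is the one documented by the stable/juno nova-docker README, and the log and state paths are illustrative assumptions:

```ini
[DEFAULT]
host = nova-docker
compute_driver = novadocker.virt.docker.DockerDriver
log_file = /var/log/nova/nova-docker.log
state_path = /var/lib/nova-docker
```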

You can see that we’ve set the value of host to nova-docker, to
differentiate this nova-compute service from the libvirt-backed
one that is already running. We’ve provided the service with a
dedicated log file and state directory to prevent conflicts with the
already-running nova-compute service.

To use this configuration file, we’ll launch a new instance of the
nova-compute service pointing at both the original configuration
file, /etc/nova/nova.conf, as well as this nova-docker
configuration file. The command line would look something like:
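Something like the following, with the configuration files in the order described in the text (the binary name may vary by distribution):

```shell
nova-compute \
    --config-file /etc/nova/nova.conf \
    --config-file /etc/nova/nova-docker.conf
```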

The ordering of configuration files on the command line is
significant: later configuration files will override values from
earlier files.

I’m running Fedora 21 on my laptop, which uses systemd, so I
created a modified version of the openstack-nova-compute.service
unit on my system, and saved it as
/etc/systemd/system/openstack-nova-docker.service:
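A sketch of such a unit, modeled on the stock openstack-nova-compute.service; the ExecStart line carries the substantive change, while the remaining directives are assumptions based on the Fedora packaging:

```ini
[Unit]
Description=OpenStack Nova Compute Server (Docker)
After=syslog.target network.target

[Service]
Type=notify
NotifyAccess=all
User=nova
ExecStart=/usr/bin/nova-compute \
    --config-file /etc/nova/nova.conf \
    --config-file /etc/nova/nova-docker.conf

[Install]
WantedBy=multi-user.target
```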

Booting a Docker container (take 1)

Let’s try starting a Docker container using the new nova-compute
service. We’ll first need to load a Docker image into Glance (you
followed the nova-docker instructions for configuring
Glance, right?). We’ll use my larsks/thttpd image,
because it’s very small and doesn’t require any configuration:
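A sketch of loading the image and booting it; the container-format and disk-format flags follow the nova-docker setup instructions, and the flavor and instance names here are illustrative:

```shell
# Pull the image locally, then load it into Glance under the same name.
docker pull larsks/thttpd
docker save larsks/thttpd | glance image-create \
    --name larsks/thttpd \
    --container-format docker \
    --disk-format raw \
    --is-public true

# Boot it via the nova-docker service.
nova boot --image larsks/thttpd --flavor m1.tiny thttpd-test
```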

The instance promptly lands in an ERROR state with a failed network
binding, and that tells us our problem: we have told our nova-docker
service that it is running on a host called “nova-docker”, and Neutron
doesn’t know anything about that host.

NB If you were to try to delete this failed instance, you would find
that it is un-deletable. In the end, I was only able to delete it by
directly editing the nova database using this sql script.

Adding a Neutron agent

We’re going to need to set up an instance of
neutron-openvswitch-agent to service network requests on our
“nova-docker” host. Like Nova, Neutron also supports a host
configuration key, so we’re going to pursue a solution similar to what
we used with Nova by creating a new configuration file,
/etc/neutron/ovs-docker.conf, with the following content:

[DEFAULT]
host = nova-docker

And then we’ll set up the corresponding service by dropping the
following into /etc/systemd/system/docker-openvswitch-agent.service:
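A sketch of that unit, modeled on the stock neutron-openvswitch-agent service; the plugin configuration path is the usual RDO layout and, like the other directives, is an assumption:

```ini
[Unit]
Description=OpenStack Neutron Open vSwitch Agent (Docker)
After=syslog.target network.target

[Service]
Type=simple
User=neutron
ExecStart=/usr/bin/neutron-openvswitch-agent \
    --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
    --config-file /etc/neutron/ovs-docker.conf

[Install]
WantedBy=multi-user.target
```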

While working on this configuration I ran into an undesirable
interaction between Docker and systemd’s PrivateTmp directive.

This directive causes the service to run with
a private mount namespace such that /tmp for the service is not
the same as /tmp for other services. This is a great idea from a
security perspective, but can cause problems in the following
scenario:

1. Start a Docker container with nova boot ...
2. Restart any service that uses the PrivateTmp directive.
3. Attempt to delete the Docker container with nova delete ...

Docker will fail to destroy the container because the private
namespace created by the PrivateTmp directive preserves a reference
to the Docker devicemapper mount in
/var/lib/docker/devicemapper/mnt/... that was active at the time the
service was restarted. To recover from this situation, you will need
to restart whichever service is still holding a reference to the
Docker mounts.
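One way to track down the culprit is to search each process's mount table for the stale devicemapper mount; this is a sketch, and the path is illustrative:

```shell
# List processes whose private mount namespace still references a
# Docker devicemapper mount.
grep -l /var/lib/docker/devicemapper/mnt /proc/[0-9]*/mounts 2>/dev/null
```

Running systemctl status against one of the resulting PIDs will show which unit owns the process, and restarting that unit releases the reference.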

I have posted to the systemd-devel mailing
list to see if there are any solutions to this behavior. As I note in
that email, this behavior appears to be identical to that described in
Fedora bug 851970, which was closed two years ago.

Update I wrote a separate post about this issue, which
includes some discussion about what’s going on and a solution.