Dynamic Docker links with an ambassador powered by etcd

February 27, 2014 · By Alex Polvi

The ambassador pattern is a novel way to deploy sets of containers that are configured at runtime via the Docker Links feature.

In this proof-of-concept demo, we deploy a redis instance that registers itself with etcd. The demo is similar to the documented redis example for Docker links, except distributed across multiple hosts. A redis client is linked with a redis server, regardless of which physical host either runs on, using a dynamic proxy that reads the registration data out of etcd. The end goal is to deploy all of these containers arbitrarily using fleet and have everything configured at runtime.

We will need the following set of containers to make all this happen.

Host A:

crosbymichael/redis - An off-the-shelf redis server.

polvi/docker-register - A docker/etcd registration container. This container will read off of the docker API and publish data to etcd.

polvi/simple-amb - This is a very simple ambassador that will forward traffic to the configured location passed via an arg. This is used to Link the docker registration container with etcd. This container could be removed on CoreOS, because etcd is at a known location, but is used for the purposes of demonstrating static versus dynamic ambassadors.

Host B:

polvi/dynamic-etcd-amb - This is where the magic happens. This is a dynamic proxy, powered by etcd, that watches a known etcd key and routes traffic to the container registered at that key. The key can change at runtime, and the proxy will update itself. If multiple instances are registered under the key space, the ambassador will start load balancing traffic across all of them.

relateiq/redis-cli - This is an off-the-shelf redis client for purposes of demonstrating the etcd powered ambassador.

The net result: the redis client on Host B reaches the redis server on Host A through the dynamic ambassador, with etcd supplying the registration data along the way.

Preparing for fleet

In order to deploy this set of containers, we will write a group of systemd service files that are fleet aware. This allows us to deploy the whole set of containers and let them be scheduled arbitrarily across the cluster of CoreOS hosts. These service files are reusable on non-CoreOS systemd distros, assuming docker is installed.

The easiest way to test this yourself is to spin up a CoreOS cluster on EC2 using our CloudFormation "launch now" button.

Example service files

Below is a set of service files corresponding to the containers mentioned above, to be deployed via fleet and supervised by systemd.

Host-A units

redis-demo.service

This container uses a bash trick to get the IP address from eth0. We need this because we are going to look up the IP/port combination with the docker port command and register it with etcd. By default, 0.0.0.0 is registered if no IP is specified, which will not work: we need to know the network address of the host that is running the container.

Note: you will need to open the 49000-50000 port range in your EC2 security group if you are using our CloudFormation template.

We use the systemd %n variable to name the container the same as the systemd unit, in this case redis-demo.service. This is important for the registration of the container with etcd.
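The unit file itself is not reproduced in this copy of the post; a sketch of what redis-demo.service might look like, based on the description above (the ip/grep pipeline and exact docker flags are assumptions, not the original's text):

```ini
[Unit]
Description=redis server

[Service]
# %n expands to the unit name (redis-demo.service); naming the container
# the same way is what the registration container keys off of.
# The $(...) bash trick publishes the port on eth0's address instead of
# the default 0.0.0.0, so the registered IP is reachable from other hosts.
ExecStart=/bin/bash -c '/usr/bin/docker run -name %n \
  -p $(ip -o -4 addr show eth0 | grep -Po "inet \K[\d.]+")::6379 \
  crosbymichael/redis'
ExecStop=/usr/bin/docker rm -f %n
```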

etcd-amb-redis.service

A static ambassador that gives the registration container a path to etcd.

polvi/simple-amb will forward all traffic it receives on port 10000 to the argument provided. In this case, 172.17.42.1:4001 is the known address for etcd on every CoreOS instance from inside a Docker container, so we simply forward all traffic there statically.

X-ConditionMachineOf tells fleet to schedule this systemd unit on the same machine as redis-demo.service, wherever that ends up. You can read more about the X-Fleet section in the fleet docs.
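The original unit is not included here; a sketch of etcd-amb-redis.service consistent with the description (the exact ExecStart is an assumption):

```ini
[Unit]
Description=static ambassador forwarding to etcd

[Service]
# simple-amb forwards everything on its port 10000 to the address given
# as an argument -- the docker bridge address where etcd listens on CoreOS
ExecStart=/usr/bin/docker run -name %n polvi/simple-amb 172.17.42.1:4001
ExecStop=/usr/bin/docker rm -f %n

[X-Fleet]
# schedule onto the same machine as the redis server
X-ConditionMachineOf=redis-demo.service
```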

redis-docker-reg.service

A container that reads port information off the docker API, and heartbeats it to etcd.

This launches the polvi/docker-register container, described above, which reads the IP and port information from the docker API and publishes that data to etcd under the service name redis-A. There are two important aspects of this container.

It requires an etcd endpoint to publish to, so we -link in the simple ambassador pointing at etcd.

It talks to the host docker instance, so we use a docker volume to bind mount the host's docker.sock into the container. Note: this gives the container full control of the host dockerd! That is a security concern, but it is required for the container to read, and then publish, the port that docker assigned.

This unit also has an X-ConditionMachineOf directive to schedule it on the same machine as redis-demo.service. Finally, we use the After systemd directive so that the process starts after etcd-amb-redis.service (on the same machine), guaranteeing it has an etcd endpoint to talk to when it launches.
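A sketch of redis-docker-reg.service matching the description above; the arguments passed to polvi/docker-register (which container to watch, which etcd name to publish under) are assumptions:

```ini
[Unit]
Description=register redis-demo.service in etcd
# start only after the static ambassador on this machine is up
After=etcd-amb-redis.service

[Service]
# -link supplies an etcd endpoint via the ambassador; the bind-mounted
# docker.sock lets the container query the docker API for the assigned
# port (and, as noted, gives it full control of the host dockerd)
ExecStart=/usr/bin/docker run -name %n \
  -link etcd-amb-redis.service:etcd \
  -v /var/run/docker.sock:/var/run/docker.sock \
  polvi/docker-register redis-demo.service redis-A
ExecStop=/usr/bin/docker rm -f %n

[X-Fleet]
X-ConditionMachineOf=redis-demo.service
```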

Host-B units

etcd-amb-redis2.service

A second simple etcd ambassador container, scheduled via fleet on a different host than redis-demo.service.

This unit uses the fleet directive X-Conflicts to make sure it is scheduled on a host other than the one where redis-demo.service landed, guaranteeing these containers will be on two different hosts.
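A sketch of etcd-amb-redis2.service; identical to the first static ambassador except for the scheduling constraint:

```ini
[Unit]
Description=static ambassador forwarding to etcd, second host

[Service]
ExecStart=/usr/bin/docker run -name %n polvi/simple-amb 172.17.42.1:4001
ExecStop=/usr/bin/docker rm -f %n

[X-Fleet]
# never on the same machine as the redis server
X-Conflicts=redis-demo.service
```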

redis-dyn-amb.service

The dynamic ambassador, scheduled alongside the second static ambassador.

This tells the proxy to expose port 6379 and route it to whatever container is registered as redis-A in etcd. X-ConditionMachineOf is used again, this time to make sure the unit is deployed to our second host, wherever etcd-amb-redis2.service landed.
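A sketch of redis-dyn-amb.service; the argument order for polvi/dynamic-etcd-amb (local port, then etcd service name) is an assumption based on the description:

```ini
[Unit]
Description=dynamic etcd-powered redis ambassador
After=etcd-amb-redis2.service

[Service]
# -link supplies etcd via the second static ambassador; the proxy watches
# the redis-A key and routes local port 6379 to whatever is registered
ExecStart=/usr/bin/docker run -name %n \
  -link etcd-amb-redis2.service:etcd \
  polvi/dynamic-etcd-amb 6379 redis-A
ExecStop=/usr/bin/docker rm -f %n

[X-Fleet]
X-ConditionMachineOf=etcd-amb-redis2.service
```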

Deploying and testing with fleet

To deploy this with fleet, we simply run fleetctl start *.service in a directory containing all of these service files. All units will be scheduled across the cluster with our topology requirements in place.
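Concretely, from a directory holding the unit files (the second command is just a sanity check on placement):

```shell
fleetctl start *.service

# confirm each unit was scheduled according to its X-Fleet constraints
fleetctl list-units
```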

To test that everything worked as expected, we will ssh to the host that is running the dynamic proxy and manually interact with it using a redis client container, relateiq/redis-cli, and a docker -link.

This command will ssh us to the host where that service is running.

fleetctl ssh -u redis-dyn-amb.service

From there, we launch the redis client in a docker container from the shell:
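One way the command could look, assuming the image ships a shell; the REDIS_* variables follow Docker's link environment-variable convention for a link aliased "redis", and must be expanded inside the container rather than by the host shell:

```shell
docker run -t -i -link redis-dyn-amb.service:redis \
  -entrypoint /bin/sh relateiq/redis-cli \
  -c 'redis-cli -h $REDIS_PORT_6379_TCP_ADDR -p $REDIS_PORT_6379_TCP_PORT'
```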