I've been running a bunch of applications on Docker for a while now, but I have
managed the containers at the single-machine level instead of as a cluster.
With the release of Swarm 1.0, I believe it is time to start clustering my
machines.

Spinning Up the Swarm

How to spin up a Swarm for development is described well in the Docker
documentation, and I'm not going to describe it in depth here. I'll settle for
the commands, adding extra documentation where I feel it is called for.

I'm using the Swarm for development with VirtualBox here, but it is simple to
substitute any of the supported docker-machine
providers.

Create a Token

Create a token with the Docker Hub discovery service. When running this in
production you should probably set up an alternate discovery backend
to avoid the external dependency.
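The token is generated with the swarm image itself. A minimal sketch, assuming the Docker client already points at a local engine:

```shell
# Create a cluster token via the Docker Hub discovery service.
# The command prints a hex token; keep it in a variable for the
# following commands.
TOKEN=$(docker run --rm swarm create)
echo $TOKEN
```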

Create a Swarm Manager

The swarm manager will be used to control the swarm. It should be protected
from access by anyone but you. I'll simulate this here by setting
--engine-label public=no. This is just a tag; you still have to make sure that
the manager is set up so it is protected from public access. It is possible to
use multiple labels to tag the engine with all the qualities of the machine.
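With docker-machine this comes down to one command. A sketch, assuming the $TOKEN variable from the previous step and the machine name swarm-manager:

```shell
# Create the Swarm manager on VirtualBox and label its
# engine public=no.
docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-master \
  --swarm-discovery token://$TOKEN \
  --engine-label public=no \
  swarm-manager
```

Worker nodes are created the same way, minus --swarm-master and with labels describing their own qualities. Afterwards, point the Docker client at the swarm with eval $(docker-machine env --swarm swarm-manager).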

Starting the Reverse Proxy

Nginx is one of my favorite building blocks when it comes to building reliable
web services. Nginx provides an official Docker image,
but in this case, when I want to automatically configure Nginx when new containers
are started, I prefer to use an alternative image called nginx-proxy.

A container started from the nginx-proxy image listens to events generated
by the Docker engine. The engine generates events for all kinds of actions,
but all we care about here is when a container is started or stopped. If you
want to see what events are triggered, run docker events in one
terminal and start and stop a few containers in another.
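For example (the container name and image are arbitrary):

```shell
# Terminal 1: stream engine events as they happen.
docker events

# Terminal 2: generate create/start/stop/destroy events.
docker run -d --name event-test nginx
docker stop event-test
docker rm event-test
```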

When nginx-proxy receives an event that a container has been started, it checks
if the container has any ports EXPOSEd. If it does, it also checks for a
VIRTUAL_HOST environment variable. If both conditions are fulfilled,
nginx-proxy re-configures its Nginx server and reloads the configuration.

When you now access the VIRTUAL_HOST, Nginx proxies the connection to your web
service. Cool!
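A minimal sketch of a container that fulfills both conditions, using the whoami demo image from the nginx-proxy project (the hostname is an example):

```shell
# The image EXPOSEs a port, and VIRTUAL_HOST tells nginx-proxy
# which hostname should be proxied to it.
docker run -d \
  -e VIRTUAL_HOST=whoami.mysite.com \
  jwilder/whoami
```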

Naturally, you will have to configure your DNS to point to your Nginx server.
The easiest way to do this is to configure all your services to point to it
with a wildcard record. Something like this:

*.mysite.com Host (A) Default xxx.xxx.xxx.xxx

In this case, we are using VirtualBox, so we can settle for adding the IP
address of our frontend to the /etc/hosts file.
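Something like this, assuming the frontend runs on the swarm-manager machine and counter.mysite.com is the hostname we want (both are examples):

```shell
# Map the test hostname to the VirtualBox machine's IP.
echo "$(docker-machine ip swarm-manager) counter.mysite.com" \
  | sudo tee -a /etc/hosts
```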

Even cooler, events also work with Swarm, and it is possible to use
nginx-proxy to listen for services that are started on different machines. All
we have to do is configure it correctly.

Starting Nginx-Proxy

nginx-proxy is started with configuration read from the Docker client
environment variables. All the environment variables were automatically
configured when you configured the Docker client to access the Swarm, above.
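A sketch of what that looks like: instead of mounting the local Docker socket, we hand the container the client's connection settings, DOCKER_HOST, DOCKER_TLS_VERIFY and the TLS certificates, so it talks to the Swarm manager. The container name and port mapping are my choices:

```shell
# Run nginx-proxy against the Swarm endpoint rather than a
# local socket, passing along the client's TLS configuration.
docker run -d \
  --name nginx \
  -p 80:80 \
  -e DOCKER_HOST \
  -e DOCKER_TLS_VERIFY \
  -e DOCKER_CERT_PATH=/certs \
  -v $DOCKER_CERT_PATH:/certs \
  jwilder/nginx-proxy
```

In a real swarm you would also add a placement constraint so the proxy lands on a publicly reachable node, for example -e constraint:public!=no.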

Starting Web Services

As a web service, I'm going to use a simple counter image, since it can use
both Postgres and Redis as a backend. I want to start the web services on the
same machine as the databases, since this allows me to use --link to connect
to the container and will speed up data access. To do this I can use an
affinity constraint: --env affinity:container==*redis*.
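Putting it together, a sketch with a Redis backend; the counter image name (example/counter) and the hostname are placeholders:

```shell
# Start Redis somewhere in the swarm.
docker run -d --name redis redis

# The affinity constraint schedules the web service on the same
# node as the redis container, so --link works; VIRTUAL_HOST
# makes nginx-proxy pick it up.
docker run -d \
  --name counter \
  --link redis:redis \
  --env affinity:container==*redis* \
  -e VIRTUAL_HOST=counter.mysite.com \
  example/counter
```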