Organizing Distributed Apps with Docker Compose

I spend the first five chapters of Docker on Windows running existing .NET Framework applications in Docker, packaging new .NET Core applications in Docker, and looking at design strategies for using third-party apps in containers.

You can use Docker to decompose monoliths and add new features to your apps, and the end result is lots of containers to manage, with dependencies between them. #25 in this series finished with 8 containers running a modernized version of the classic Nerd Dinner ASP.NET app.

I use a PowerShell script to run up all the containers, but that's just a simple option that keeps the focus on what the containers are doing. As soon as you have more than one container in your app, you'll be using Docker Compose to manage it.

Docker Compose is a separate client from the normal docker CLI. It uses a YAML file to define desired application state and makes calls to the Docker API to deploy apps.

The Docker Compose syntax is very simple and it's a great way to define the structure of a distributed app. Think of the Dockerfile as the deployment guide for each component of your app, and the Compose file as the deployment guide for the whole application.

You can use Docker Compose on a single-node Docker environment and in a cluster. So the same application definition gets used by developers with Docker Desktop right through to production with Docker Enterprise.

Defining Distributed Applications with Docker Compose

You define applications in Compose in terms of services rather than containers. Services are deployed as containers - but a service could run as multiple containers from the same definition, so it's an abstraction above managing individual containers on hosts.

Here's a snippet of the full Compose file for the modernized Nerd Dinner app running in Windows containers:
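A sketch of what that snippet looks like - the image names, tags, volume paths, environment settings and network name here are assumptions rather than the original values:

```yaml
version: '3.3'

services:

  nats:
    image: nats:nanoserver                      # assumed tag for the Windows NATS image
    networks:
      - nd-net

  elasticsearch:
    image: sixeyed/elasticsearch:nanoserver     # assumed image name
    environment:
      - ES_JAVA_OPTS=-Xms512m -Xmx512m          # example environment variable
    volumes:
      - es-data:C:\data                         # named volume for the index data
    networks:
      - nd-net

  nerd-dinner-db:
    image: dockeronwindows/ch06-nerd-dinner-db  # assumed image name
    env_file:
      - db-credentials.env                      # environment file, name assumed
    volumes:
      - db-data:C:\data                         # named volume for the database files
    networks:
      - nd-net

networks:
  nd-net:

volumes:
  es-data:
  db-data:
```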

- the version is the Docker Compose schema version - different versions of Docker and Docker Compose support different features, and the schema version helps enforce compatibility
- services is the top-level collection for all the components in the application. Each component has its own service definition
- the nats service is the simplest - it defines the Docker image to use, and it connects the service containers to a Docker network. nats is the service name, and that gets used in Docker's DNS server, so when containers look up the nats hostname, Docker returns the IP address of this service's container
- the elasticsearch service names the Docker image to use and connects to the same Docker network, but it also specifies extra options: environment variables to surface in the containers, and Docker volumes to attach. These are equivalent to the -e and -v flags in the docker container run command
- nerd-dinner-db is the SQL Server database for the Nerd Dinner app, with the schema packaged in Windows Dockerfile #16. It has a volume for data, and it uses an environment file for environment variables

Compose is powerful because it lets you configure services with all the options you need, but the YAML files are still simple and readable. The longest service definition is for the Nerd Dinner ASP.NET website:
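A sketch of that definition - the service name, image name, published port and settings here are assumptions:

```yaml
  nerd-dinner-web:
    image: dockeronwindows/ch06-nerd-dinner-web   # assumed image name
    ports:
      - "80:80"                                   # publish the website port to the host
    environment:
      - HOMEPAGE_URL=http://nerd-dinner-homepage  # example setting, name assumed
    depends_on:
      - nerd-dinner-db                            # services this one needs running first
      - nats
    networks:
      - nd-net
```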

This also includes the ports to publish, and the other services which this service needs, in the depends_on section.

In all, the Compose file comes in at 87 lines, including whitespace. That's a pretty efficient way of describing a distributed app which runs across 8 services, using Nano Server and Windows Server Core, and running Go, Java, Node.js and .NET components. It's a standard format too, which means you can use tools like docker-compose-viz to create visualizations.

Dependencies are worth a mention. On the desktop, Compose will start containers in the correct order to honour the dependencies. In a dynamic, clustered environment that doesn't apply - it limits the cluster too much if certain containers have ordering requirements.

Deploying and Managing Apps with Compose

You use the docker-compose command line with your Compose YAML file to manage your app. The key commands are up to deploy the app, down to stop the app and remove all containers, and build to build the images.

To run the app using Compose, just clone the repo, navigate to the directory and use docker-compose up:
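Something like this - the repo URL and directory aren't given here, so they stay as placeholders:

```
git clone <repo-url>
cd <solution-directory>    # the directory containing docker-compose.yml
docker-compose up -d
```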

Compose looks for a file called docker-compose.yml if you don't specify a filename. The YAML file in this directory defines images which are all public on Docker Hub, so Compose will pull those images if you don't have them locally.

Then Compose starts containers for all the services, in the right order to maintain the dependencies. The -d flag in Compose is the same as in docker container run - it just starts containers in the background.

This docker-compose.yml doesn't specify the scale for any services, so they'll all launch with the default - one container per service. The Compose file is the desired state: when you run docker-compose up, Compose compares it with the actual state in the Docker engine and creates whatever it needs to reach the desired state.

Check the running containers with docker container ls and you'll see the whole application stack is there, all in containers with names generated by Compose - which prepends the current directory name to the service name:
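The container IDs and images below are illustrative, but the name format is how classic docker-compose generates names - {project}_{service}_{index}, with the project defaulting to the directory name:

```
> docker container ls
CONTAINER ID   IMAGE                                   ...   NAMES
7f3a29c4b0d1   dockeronwindows/ch06-nerd-dinner-web    ...   ch06_nerd-dinner-web_1
...
a91c55f02e84   nats:nanoserver                         ...   ch06_nats_1
```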

You can browse to the Nerd Dinner app container now, and use the app in the same way I described back in #25.

You can use Compose to scale the application components - provided the components are able to run in multiple instances without affecting each other. The message handlers are designed to run at scale in a dynamic environment, so they can be easily scaled up:
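The exact service names are assumptions, but the shape of the command is docker-compose up with a --scale flag per service:

```
docker-compose up -d --scale nerd-dinner-index-handler=2 --scale nerd-dinner-save-handler=3
```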

This will add a second container for the Elasticsearch index handler, and two more containers for the SQL Server save handler. They're running message handlers which connect to NATS and because they're designed for scale, they'll share the message processing load.
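Updating the app follows the same pattern: edit the Compose file and run docker-compose up again. For the database change described below, the edit is just a new image tag in the service definition - the name and tag here are assumptions:

```yaml
  nerd-dinner-db:
    image: dockeronwindows/ch06-nerd-dinner-db:v2   # updated tag triggers recreation
```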

Compose recreates the database service - removing the old container and running a new one from the new image tag. The new container attaches to the same volume as the old container, so all the data in SQL Server is preserved, and the new column gets added when the database container startup script runs.

There are other services defined as being dependent on the database service - and the database service has changed, so those services get recreated too. And in this case, Compose also scales down the message handler services to a single container each.

Why does Compose scale down services which I've explicitly scaled up? Because the Compose file is the desired state - and my updated file doesn't specify any service scales, so the default is 1. Compose sees the running state with a greater scale and it removes containers to bring the service in line with the desired state.

This is a side-effect of mixing declarative deployment with the Compose file and imperative deployment with the --scale option.

It's better to stick to declarative deployment and make all updates through the Compose file - which lives in source control with your Dockerfiles and your app source.

Separating Concerns with Compose Overrides

You can also split your app definition across multiple Compose files. That's very handy to separate concerns - so you can include deployment options for dev and production in separate files, and have a central file for the core application definition.

- docker-compose.yml defines the core application services, with options that apply in every environment
- docker-compose.build.yml adds the build definitions for the custom Docker images. This gets used in docker-compose build by devs and in the CI pipeline, but not in other scenarios. Putting it in a separate file keeps the core Compose file clean
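Compose merges the files when you pass multiple -f flags, with later files adding to or overriding earlier ones - so the build workflow combines the core file with the build file:

```
docker-compose -f docker-compose.yml -f docker-compose.build.yml build
```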