I've had the opportunity to set up a completely new Docker-based microservice architecture at my current job, so since everyone is sharing their Docker tips and tricks, I thought I'd do the same. So here is a list of tips, tricks, or whatever you might call them, that you might find useful in your day-to-day dealings with Docker.

1. Multiple Dockers at the Same Host

If you want, you can run multiple Docker daemons on the same host. This is especially useful if you want to set up different TLS settings, network settings, log settings or storage drivers for a specific daemon. For instance, we currently run a standard setup of two Docker daemons: one runs consul, which provides DNS resolution and serves as the cluster store for the other Docker daemon.
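As a sketch, running two daemons side by side looks something like this. The socket paths, data directories and consul address are illustrative, and daemon flags vary between Docker versions (newer versions use dockerd and --data-root instead of docker daemon and --graph):

```shell
# Daemon 1: runs only consul; its own socket and storage location
docker daemon \
  --host unix:///var/run/docker-consul.sock \
  --graph /var/lib/docker-consul \
  --pidfile /var/run/docker-consul.pid &

# Daemon 2: the main daemon, using the consul instance running on
# daemon 1 as its cluster store
docker daemon \
  --host unix:///var/run/docker-main.sock \
  --graph /var/lib/docker-main \
  --cluster-store consul://127.0.0.1:8500 \
  --pidfile /var/run/docker-main.pid &

# Point the CLI at a specific daemon with -H
docker -H unix:///var/run/docker-consul.sock ps
```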

2. Docker Exec of Course

This is probably one of the tips that everyone mentions. When you're using Docker not just for your staging, production or testing environments, but also on your local machine to run databases, servers, key stores, etc., it is very convenient to be able to run commands directly within the context of a running container.

We do a lot with Cassandra, and docker exec works great for checking whether tables contain the correct data or for executing a quick CQL query:
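For example (the container name cassandra and the keyspace/table names are placeholders), you can drop into an interactive CQL shell or fire off a single query:

```shell
# Open an interactive CQL shell inside a running Cassandra container
docker exec -it cassandra cqlsh

# Or run a single query non-interactively
docker exec -it cassandra cqlsh -e "SELECT * FROM my_keyspace.my_table LIMIT 10;"
```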

And this can of course be applied to any (client) tool bundled with an image. I personally find this much easier than installing all the client libraries locally and having to keep the versions up to date.

3. Docker Inspect and JQ

This isn't so much a Docker tip as it is a JQ tip. If you haven't heard of JQ, it is a great tool for parsing JSON from the command line. This also makes it a great way to see what is happening inside a container, instead of having to use the --format specifier, whose exact syntax I can never remember:
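For example, piping docker inspect output through jq (the container name consul is a placeholder):

```shell
# Pretty-print the full inspect output
docker inspect consul | jq '.'

# Grab a single field, e.g. the container's IP address
docker inspect consul | jq -r '.[0].NetworkSettings.IPAddress'

# Or list the exposed ports
docker inspect consul | jq '.[0].Config.ExposedPorts'
```

Note that docker inspect returns a JSON array (you can inspect several containers at once), hence the `.[0]` to select the first element.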

And of course this also works great for querying other kinds of (Docker-esque) APIs that produce JSON (e.g. Marathon, Mesos, Consul etc.). JQ provides a very extensive query language for accessing and processing JSON. More information can be found here: https://stedolan.github.io/jq/

4. Extending an Existing Container and Pushing it to a Local Registry

On the central Docker Hub there are a great number of images available that will serve many different use cases. What we noticed, though, is that we often had to make some very small changes to these images: for instance to support better health checks from consul, to behave better in our cluster setup, or to add additional configuration that wasn't easy to do through environment variables or command line parameters. What we usually do when we run into this is just create our own Docker image and push it to our local registry.

For instance, we wanted to have JQ available on our consul image to make health checking our services easier:
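A minimal sketch of such a Dockerfile (the base image tag and the health-check script name are illustrative; the official consul image is Alpine-based, so apk applies there):

```shell
# Write a Dockerfile that extends the stock consul image with jq
# and our own health-check script
cat > Dockerfile <<'EOF'
FROM consul:latest

# jq is used by the health-check scripts to parse JSON responses
RUN apk add --no-cache jq

# copy in our own health-check script (name is an example)
COPY check-service.sh /usr/local/bin/check-service.sh
EOF
```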

With our health check scripts and JQ we do the health checks from our own consul image. We also have a local registry running so after image creation we just tag the resulting image and push it to our local registry:
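The build, tag and push steps look like this (registry.internal:5000 is a placeholder for your own registry address):

```shell
# Build the extended image
docker build -t our-consul .

# Tag it for the local registry and push
docker tag our-consul registry.internal:5000/our-consul:latest
docker push registry.internal:5000/our-consul:latest

# Other hosts in the environment can now pull it
docker pull registry.internal:5000/our-consul:latest
```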

Now it is available to our developers, and can also be used in our different testing environments (we use a separate registry for production purposes).

5. Accessing Docker on Remote Hosts

The Docker CLI is a very cool tool. One of its great features is that you can use it to easily access multiple Docker daemons, even if they are on different hosts. All you have to do is set the DOCKER_HOST environment variable to point to the listening address of the Docker daemon and, provided the port is reachable, you can directly control Docker on a remote host. This is pretty much the same principle used by docker-machine when you run a Docker daemon and set up the environment through docker-machine env:
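For example (the address and machine name are illustrative; this assumes the remote daemon listens on an unencrypted TCP port, whereas in practice you'd want TLS):

```shell
# Point the local CLI at a remote Docker daemon
export DOCKER_HOST=tcp://10.0.0.12:2375

# All docker commands now run against the remote host
docker ps
docker images

# docker-machine sets the same environment variables for you
eval "$(docker-machine env my-machine)"
```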

But you don't have to limit yourself to just Docker daemons started through docker-machine. If you've got a controlled and well-secured network where your daemons are running, you can just as easily control them all from a single machine (or stepping stone).

6. The Ease of Mounting Host Directories

When you're working with containers, you sometimes need to get some data inside the container (e.g. shared configuration). You can copy it in, or ssh it in, but most often it is easiest to just add a host directory to the container that is mounted inside the container. You can easily do this in the following manner:
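For example (the paths and image name are illustrative):

```shell
# Mount the host directory /opt/shared-config into the container at /config
docker run -it -v /opt/shared-config:/config ubuntu /bin/bash

# Add :ro when the container should only read the data
docker run -it -v /opt/shared-config:/config:ro ubuntu /bin/bash
```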

As you can see the directory we specified is mounted inside the container, and any files we put there are visible on both the host and inside the container. We can also use inspect to see what is mounted where:
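Combined with the JQ tip from earlier, checking the mounts of a container named, say, app looks like this (the name is a placeholder):

```shell
# Show what is mounted where for a running container
docker inspect app | jq '.[0].Mounts'
```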

7. Add DNS Resolving to Your Containers

I've already mentioned that we use consul for our containers. Consul is a distributed KV store which also provides service discovery and health checks. For service discovery Consul provides either a REST API or plain old DNS. The great part is that you can specify the DNS server for your containers when you run a specific image. So when you've got Consul running (or any other DNS server) you can add it to your Docker daemon like this:
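A sketch of both variants, per container and daemon-wide (10.0.0.5 is a placeholder for the address where Consul's DNS interface is reachable; Consul serves DNS on port 8600 by default, so you typically forward port 53 to it):

```shell
# Per container: pass --dns when starting it
docker run -it --dns 10.0.0.5 ubuntu /bin/bash

# Daemon-wide: every container started by this daemon gets the DNS server
docker daemon --dns 10.0.0.5

# Inside a container, registered services now resolve by name
# (the service name is an example):
#   ping cassandra.service.consul
```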

Now we can resolve the IP address of any container registered with Consul by name. For instance, in our environment we've got a Cassandra cluster. Each Cassandra instance registers itself under the name 'cassandra' with our Consul cluster. The cool thing is that we can now resolve the address of Cassandra based on host name alone (without having to use Docker links).

8. Docker-UI is a Great Way to View and Get Insight Into Your Containers

Managing Docker using the Docker CLI isn't that hard and gives great insight into what is happening. Often, though, you don't need the full power of the Docker CLI, but just want a quick overview of which containers are running and what is happening. A great project for this is Docker UI (https://github.com/crosbymichael/dockerui):
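Docker UI itself runs as a container; a typical way to start it is to hand it the daemon's socket and expose its web interface (the image name and port 9000 are taken from the project's README):

```shell
# Start Docker UI against the local daemon
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockerui/dockerui

# Then browse to http://localhost:9000
```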

With this tool, you can see the most important aspects of the containers and images of a specific Docker daemon.

9. Container Not Starting Up? Overwrite the Entry Point and Just Run it From Bash

Sometimes a container just doesn't do what you want it to do. You've recreated the Docker image a couple of times, but somehow the application you run on startup doesn't behave as you expect, and the logging shows nothing useful. The easiest way to debug this is to overwrite the entry point of the container and look at what is going on inside it. Are the file permissions right? Did you copy the right files into the image? Or is it one of the other thousand things that could go wrong?

Luckily Docker has a simple solution for this. You can start a container with an entrypoint of your choosing:
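For example (myimage is a placeholder for the image that refuses to start):

```shell
# Start the container with a shell instead of its normal entry point
docker run -it --entrypoint /bin/bash myimage

# Inside the container you can then look around, e.g.:
#   ls -l /app          # are the files there, with the right permissions?
#   env                 # are the expected environment variables set?
#   /app/start.sh       # run the original startup command by hand
```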

10. Listening to Events Within a Container

When you're writing your own scripts, or just want to learn what is happening with your running containers, you can use the docker events command. Writing scripts around this is very easy.
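A minimal sketch (the filter and the action taken are examples; docker events streams one line per event, which makes it easy to process in a loop):

```shell
# Stream daemon events and react to each one; here we just log
# every container that dies
docker events --filter 'event=die' | while read -r event; do
  echo "container died: $event"
done
```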

That's it for now, and we haven't even touched upon Docker Compose and Swarm yet, or the Docker 1.9 network overlay features! Docker is a fantastic tool with a great set of additional tools surrounding it. In the future I'll show some more of the stuff we've done with Docker.