Watchtower: Automatic Updates for Docker Containers

Brian DeHamer

August 18, 2015


Last month, we launched a new tool named Zodiac which is used to deploy (and rollback) Docker apps. In his introductory post, Alex mentioned that the CTL Labs team has been exploring a few different ideas for deploying containerized applications. This is something that's become more important to us as we've moved from just creating images for other people to actually running containerized applications in production.

Today, we're going to present another approach to deployment that we call Watchtower.

Pull-Based Deployment

Whereas Zodiac uses a push-based deployment model (a user pushes an application to a remote system), Watchtower enables pull-based deployments. Watchtower is an application that will monitor your running Docker containers and watch for changes to the images that those containers were originally started from. If Watchtower detects that an image has changed, it will automatically restart the container using the new image.

With Watchtower you can update the running version of your containerized app simply by pushing a new image to the Docker Hub or your own image registry. Watchtower will pull down your new image, gracefully shut down your existing container, and restart it.
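Watchtower itself runs as a container. As a concrete example, here's roughly how it can be started; mounting the Docker socket is what lets it inspect and restart sibling containers. The image name and flags below reflect the project's README, but treat this as a sketch rather than the one canonical invocation:

```shell
# Run Watchtower as a container, giving it access to the Docker API
# via the mounted socket so it can watch and restart other containers.
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  centurylink/watchtower
```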

Watchtower will take care to ensure that any flags/arguments that were used when a container was deployed initially are maintained across restarts. If you specified a -p 80:8080 port mapping when you started your container, Watchtower will use that same configuration any time it restarts it.

Additionally, Watchtower knows how to properly restart a set of linked containers. If an update is detected for one of the dependencies in a group of linked containers, Watchtower will stop and start all of the containers in the correct order so that the application comes back up correctly.
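To make the linked-container case concrete, consider a hypothetical pair like the one below (the names and images are placeholders, not from any real deployment). If a new db image were pushed, Watchtower would stop web, stop db, then start db followed by web so the link comes back up in the right order:

```shell
# A hypothetical linked pair: web depends on db via --link, so db must
# be running (and restarted first) for web to start correctly.
docker run -d --name db example/postgres
docker run -d --name web --link db:db -p 80:8080 example/web
```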

This style of deployment is definitely not appropriate for all applications, but it may be useful to hear how we're using it within the CenturyLink Labs team.

Case Study

Back in May of this year, we launched ImageLayers, the first hosted application released by our team. We had published a number of Docker images prior to that, but this was the first one that we were going to run on a public-facing server.

As part of our previous image creation efforts we had already developed a continuous integration process that was working well for us. Whenever a commit is pushed to the master branch of our GitHub repo, a webhook triggers a CircleCI job that runs tests, compiles code (where appropriate), and builds a Docker image. Once the image is built, the CircleCI job pushes it to the Docker Hub.

For most of our images, the process stops there. However, with a hosted application, we needed the extra step of starting the container on one or more of our servers. Our initial pass at this had our CircleCI job SSH'ing into our servers and executing docker commands to pull images and stop/start containers. This worked, but we realized that we were opening a port on our servers solely for our CI/CD process. Most of our servers sit on a private network behind a load balancer and don't require any sort of direct, incoming connection from the public internet.
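That push-based deploy step might have looked something like this in a CI job (the host, user, and image names here are placeholders, not our actual configuration):

```shell
# Hypothetical CI deploy step: SSH to the host and cycle the container.
ssh deploy@app-server.example.com <<'EOF'
docker pull example/imagelayers:latest
docker stop imagelayers && docker rm imagelayers
docker run -d --name imagelayers -p 80:8080 example/imagelayers:latest
EOF
```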

Opening a public-facing port isn't the end of the world, but given the security implications it's something we'd rather avoid when possible.

With that limitation in mind, we started thinking about ways to do a pull-based deployment initiated directly from the hosting server. Our first implementation was a collection of shell scripts that polled the Hub for new images, pulled them down, and restarted the running containers.
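The core check those scripts performed can be sketched as a comparison of image IDs: the ID a container was started from versus the ID of the freshly pulled image under the same tag. The docker inspect format strings are real, but the script as a whole is illustrative, not our original code:

```shell
# needs_update returns success (0) when the container's image is stale,
# i.e. a newer image has been pulled under the same tag.
needs_update() {
  local container="$1" image="$2"
  docker pull "$image" >/dev/null    # fetch the latest image for this tag
  local running current
  running=$(docker inspect --format '{{.Image}}' "$container")  # ID the container runs
  current=$(docker inspect --format '{{.Id}}' "$image")         # ID of the pulled image
  [ "$running" != "$current" ]
}

# Example use: cycle the container when its image has changed.
# if needs_update myapp example/myapp:latest; then
#   docker stop myapp && docker rm myapp
#   docker run -d --name myapp -p 80:8080 example/myapp:latest
# fi
```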

This approach worked well enough for us that we decided to turn our shell scripts into a general-purpose service that could be used by anyone. Watchtower was born!

Now we have Watchtower running on each of our QA and production servers and our apps are automatically updated anytime a new image is pushed to the Docker Hub. We use different image tags for the different environments so the images destined for our QA server are tagged with "qa" while anything tagged with "latest" goes to production.
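One way to wire that tagging scheme into CI (an assumption on our part — the post doesn't show the actual job config, and the image name below is a placeholder) is to derive the tag from the git branch:

```shell
# Map a git branch to an image tag: master ships to production as
# "latest"; everything else lands on the QA server via the "qa" tag.
tag_for_branch() {
  case "$1" in
    master) echo "latest" ;;
    *)      echo "qa" ;;
  esac
}

# In the CircleCI job, something like:
# docker tag example/myapp "example/myapp:$(tag_for_branch "$CIRCLE_BRANCH")"
# docker push "example/myapp:$(tag_for_branch "$CIRCLE_BRANCH")"
```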

Limitations

There are a few important Watchtower limitations that should be noted. First, Watchtower doesn't make any attempt to address the first-run issue. Watchtower will monitor running containers and ensure that they stay up-to-date, but getting those containers running in the first place is outside of its scope.

The initial start-up of your application will need to be handled by something like Docker Compose, Zodiac or plain ol' docker run. In the future, we may add a feature to Watchtower that would allow it to bootstrap your application, but even then you'll still need a way to start Watchtower itself.

The second Watchtower limitation relates to application downtime. While Watchtower is quick to restart containers, it makes no zero-downtime guarantees: since the old container must be stopped before the new one can be started, there will be some (hopefully brief) window when the container isn't running at all.

For many of our use cases, a second of downtime during a deployment is no big deal. However, we recognize that there are plenty of applications with stricter requirements. If you have an application that is sensitive to downtime, you'll want to be careful how you use Watchtower.

Luckily, applications with 100% uptime requirements are probably already running with multiple instances. In that case, you may still be able to use Watchtower -- just so long as not all of your containers get updated at precisely the same time.

Comment & Contribute

Please take Watchtower for a spin and let us know if you find it useful. If you have comments or ideas for improvement you can leave feedback on our GitHub Issues page.