This article describes several alternatives for gathering Docker container logs in the distributed environment of an Apache Mesos and Marathon cluster: syslog, container linking, the Docker REST API, embedded logging piping STDOUT/STDERR, and the Mesos APIs. We’ll go through a problem statement and the different alternatives, and describe the challenges related to each of them.

By the way, the Docker REST API combined with intelligent "Docker inspect" hooks does a great job.

Background

At elastic.io we have built an integration platform for developers, with the best possible environment to code, test, and run integration flows. An integration flow is a sequence of integration components that are connected to each other. Each integration component is an individual process running in a Docker container that communicates with the next component via a persistent RabbitMQ queue. We provide tooling and monitoring on top of that, and showing the logs of integration components is an important part of it.

A Logging Problem in Docker Containers

We have a large number of Docker containers, and we need to aggregate their logs so that we can show them to our users. The Docker containers run inside Apache Mesos and are scheduled with Mesosphere Marathon on a varying number of Mesos slaves. Our goal is to support all programming languages (running inside Docker containers), so we can’t really impose any specific logging framework. Therefore, our options are limited to grabbing STDOUT and STDERR and pushing them to persistent storage, e.g. to S3. This, by the way, is not too far from the 12 Factor Apps logging concept, which actually supports our point here.

Another requirement for the solution is that we have to encrypt the log output. Security is a very important part of what we do, and log output may contain sensitive information. So, we treat logs just like user data — logs have to be encrypted with a tenant-specific key.

Implementation Alternatives

After some Googling, we identified the following alternatives:

Alternative A: Mounted volume – Store container logs on the mounted volume and pick them up from there.

Alternative B: Syslog – Aggregate logs within a container and push them somewhere over the network, e.g. via syslog.

Alternative C: Docker API – Use a Docker REST or CLI API and attach to each container after the start.

Alternative A: Mounted Volume

It’s a great solution: just mount /var/log to the outside of the container and use other tools, such as Logstash, to collect the logs. The main advantage is simplicity.

However, there are the following disadvantages:

We don’t know which applications will run inside our Docker containers, so we can’t assume that their logging will be written to the filesystem.

Enforcing that logging be done to a file also goes against the 12 Factor Apps logging concept.

As we work inside a Mesos/Marathon cluster, we would have to make sure that log-collector agents would be active on all Mesos slaves.

Disk capacity: this would be partially solved by the Mesos sandbox, but it becomes a problem again when mounting an outside volume.

This solution would be ideal for us if we were packaging existing applications. But as that’s not the case, we decided not to proceed with this one. However, if you like it and it fits your case, here is a nice blog post about it.

Alternative B: Syslog

We could aggregate logs within a container and then just push them to syslog. Syslog is a great UNIX tool for logging with a long history. Hence, it is very stable and reliable.

This solution would involve connecting and exposing a syslog port in or outside of the container, and pushing data from the container into it. The best way to do that is by linking containers together. This solution has several advantages over the previous (filesystem mount) solution, like:

No need for filesystem mounts; the network is simpler to use.

Syslog gets the logging information pushed to it; it doesn’t have to poll the filesystem for it.

More flexible deployment options.

There are, however, also some drawbacks:

The container needs to know about syslog and push the logs itself, so we’re imposing some limitations on containers again.

The container linking concept competes with some of the Mesos and Marathon concepts, so it doesn’t seem right to use container linking without a proper network virtualization layer.

To minimize network load, we would have to deploy a Docker container with syslog on all slaves.

Alternative C: Docker API

The Docker API solution involves the following steps:

Monitor the start of new Docker containers on the Mesos slaves.

As soon as a new container is started, attach to it and grab all STDOUT and STDERR output from it.

Encrypt the output and push it to the appropriate storage (e.g. S3).

The advantage of this solution is that no assumptions are made about the application inside a container. All logs sent to STDOUT and STDERR will be forwarded.

The drawbacks are, however, the following:

Network communication is required, so we have to work with distributed ‘agents’ on localhost to minimize it.

Logging agents would need access to the Docker API, so this represents a potential security issue.

We would need a clever way to access the Docker API in a secure and reliable manner.

We would need to monitor the uptime of the logging agents to mitigate their failures.

More details about this solution below.

The Solution — Say Hello to Boatswain

As you might have already guessed, the last solution is the one we implemented, and I have to say, so far it works like a charm. Our distributed Docker logging agent is called Boatswain, and it does a great job aggregating logging information from the Docker containers that run on our Mesos cluster. And guess what: it’s less than 100 lines of code.

We use the so-called docker-allcontainers module, which in turn uses dockerode and never-ending-stream to access the Docker API. Boatswain is notified by the local Docker daemon about starts and stops of all new and existing containers:

This app is packaged as a Docker container. We then use Apache Marathon to start and monitor it.

Obviously, somewhere along the way we've run into several other issues. Some of them were:

How to Connect to the Docker API?

As our logger process is deployed as a Marathon app, we needed a secure way to give it access to the Docker daemon running on the Mesos slave. Pavel and George, our developers, found a nice way to do that: they simply mounted the Docker socket inside the Docker container. Here is how it looks in our Marathon app descriptor file:
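The relevant part of the descriptor looks roughly like this (a minimal sketch: the image name is a placeholder, and the exact volume syntax depends on your Marathon version):

```json
{
  "id": "boatswain",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "our-registry/boatswain"
    },
    "volumes": [
      {
        "hostPath": "/var/run/docker.sock",
        "containerPath": "/var/run/docker.sock",
        "mode": "RW"
      }
    ]
  }
}
```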

As the container runs as root and the Mesos daemon also runs as root (it has to start Docker containers somehow, too), we have a nice socket-based solution that imposes no network load at all. IMHO, it’s the best way to use the Docker API from a Marathon app.

Note: Make sure your Marathon version already supports the volumes in the configuration.

How to Deploy Logger Collector App?

So, how do we make sure that each Mesos slave has exactly one instance of Boatswain up and running? A good solution here is Marathon constraints, and this is our Marathon app descriptor:

{
  "id": "boatswain",
  "constraints": [
    ["hostname", "UNIQUE"]
  ],
  ...
}

Now we just need to scale our app to the exact number of slaves, and Marathon will not only distribute Boatswain to all slaves, but also make sure it is restarted in case of a shutdown.

There is, however, one little problem: when we increase the number of slaves, we need to update the Boatswain application descriptor. We could, of course, set a very large number of instances in the first place and rely on the constraint so that Marathon starts only one instance per slave. However, this would leave the Boatswain deployment in ‘pending’ status in the Marathon UI, which is not nice.

We still have to figure out the best solution here.

How to Know Which App is Running in Which Container?

As we make no assumptions about the code running inside containers, we have to find a reliable way to identify the containers and associate them with the particular integration components of a particular tenant running on our system. This is quite a significant issue. What we get over the Docker API is a container ID, which is essentially a randomly generated UUID. Neither Marathon nor Mesos gives us a reliable way to transport any kind of identifier down to the Docker container (e.g. naming the container Marathon-App-SlaveID-Random or something similar).

The solution we found for this issue is to inspect the container beforehand. When launching an integration component on Marathon, we give it a couple of environment variables so that it can, for example, connect to RabbitMQ and decrypt messages from there. With docker inspect, we gain access to these environment variables, so we can reliably identify the app inside the Docker container and encrypt the log files with a tenant-specific key.
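The identification step can be sketched as follows. Docker inspect reports the container’s environment as an array of "KEY=value" strings under Config.Env (e.g. via dockerode’s container.inspect()); the helper and the variable names below are illustrative, not our actual ones.

```javascript
// Hypothetical helper: turn the Config.Env array that docker inspect
// reports into a lookup object, so we can read our identifiers out of it.
function parseEnv(envArray) {
  var env = {};
  envArray.forEach(function (entry) {
    // Split only on the first '=' so values may contain '=' themselves.
    var idx = entry.indexOf('=');
    env[entry.slice(0, idx)] = entry.slice(idx + 1);
  });
  return env;
}

// Example payload with made-up variable names:
var env = parseEnv([
  'COMPONENT_ID=step_1',
  'TENANT_ID=acme'
]);
console.log(env.TENANT_ID); // prints "acme"
```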

Conclusion

We are quite happy with the resulting approach: it’s not only simple (<100 lines of code) but also a clever solution that uses the technologies at hand and imposes no requirements on the applications running in Docker containers on top of Mesosphere Marathon and Apache Mesos.