Running Docker in Jenkins (in Docker)

In this post we’re going to take a quick look at how you can mount the Docker sock inside a container in order to create “sibling” containers. One of my colleagues calls this DooD (Docker-outside-of-Docker) to differentiate from DinD (Docker-in-Docker), where a complete and isolated version of Docker is installed inside a container. DooD is simpler than DinD (in terms of configuration at least) and notably allows you to reuse the Docker images and cache on the host. By contrast, you may prefer to use DinD if you want to keep your images hidden and isolated from the host.
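As a minimal sketch of the DooD idea (the images here are illustrative): a container with the host's socket mounted can start containers that run as siblings on the host's daemon.

```shell
# From the host: start a container that can talk to the host's daemon.
# Any container it starts becomes a *sibling* on the host, not a nested child.
docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    ubuntu \
    docker run --rm ubuntu echo "hello from a sibling container"
```

Note that mapping in the client binary like this only works cleanly when it is statically linked; the comments further down cover the dynamically linked case.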

To explain how DooD works, we’ll take a look at using DooD with a Jenkins container so that we can create and test containers in Jenkins tasks. We want to create these containers as the Jenkins user, which makes things a little trickier than using the root user. This is very similar to the technique described by Pini Reznik in Continuous Delivery with Docker on Mesos In Less than a Minute, but we’re going to use sudo to avoid the issues Pini faced with adding the user to the Docker group.

We’ll be using the official Jenkins image as a base, which makes everything pretty straightforward.

Create a new Dockerfile with the following contents:


FROM jenkins:1.596

USER root
RUN apt-get update \
 && apt-get install -y sudo \
 && rm -rf /var/lib/apt/lists/*
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

USER jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

We need to give the jenkins user sudo privileges in order to be able to run Docker commands inside the container. Alternatively we could have added the jenkins user to the Docker group, which avoids the need to prefix all Docker commands with ‘sudo’, but is non-portable due to the changing gid of the group (as discussed in Pini’s article).

The last two lines process any plug-ins defined in a plugins.txt file. Omit the lines if you don’t want any plug-ins, but I would recommend at least the following:


$ cat plugins.txt
scm-api:latest
git-client:latest
git:latest
greenballs:latest

If you don’t want to install any plug-ins, either create an empty file or remove the relevant lines from the Dockerfile. None of the plug-ins are required for the purposes of this blog.

Now build and run the container, mapping in the Docker socket and binary.


$ docker build -t myjenk .
...
Successfully built 471fc0d22bff
$ docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker -p 8080:8080 myjenk

You should now have a running Jenkins container, accessible at http://localhost:8080, that is capable of running Docker commands. We can quickly test this out with the following steps:

Open the Jenkins home page in a browser and click the “create new jobs” link.
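The job configuration itself isn't reproduced here; a minimal freestyle job only needs an "Execute shell" build step along these lines (the image and message are illustrative):

```shell
# Hypothetical "Execute shell" build step for the test job.
# sudo is needed because the jenkins user is not in the docker group.
sudo docker run --rm ubuntu echo "Docker works inside Jenkins"
```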

With any luck, you should now have a green (or blue) ball. If you click on the ball and select “Console Output”, you should see something similar to the following:

Great! We can now successfully run Docker commands in our Jenkins container. Be aware that there is a significant security issue, in that the Jenkins user effectively has root access to the host; for example Jenkins can create containers that mount arbitrary directories on the host. For this reason, it is worth making sure that the container is only accessible internally to trusted users and considering using a VM to isolate Jenkins from the rest of the host.

There are other options, principally Docker in Docker (DinD) and using HTTPS to talk to the Docker daemon. DinD isn’t really more secure due to the need to use a privileged-mode container, but it does avoid the need for sudo. The main disadvantage of DinD is that you don’t get to reuse the image cache from the host (although this may be useful if you want a clean environment for your test containers that is isolated from the host). Exposing the socket via HTTPS doesn’t require sudo and still uses the host’s images, but it is arguably the least secure option due to the increased attack surface from opening a port.

I plan to take a more in-depth look at securely setting up Docker on an HTTPS socket in a later blog.

If you’d like to learn more about cloud native, grab a copy of the new book.

When using docker run inside the Jenkins container with volumes, you are actually sharing a folder on the host, not a folder within the Jenkins container. To make that folder “visible” to Jenkins (otherwise it is out of your control), it should live under a path that matches a volume mounted into the Jenkins container itself.

That container will produce a binary in /target which ends up in /home/ernest/data/jenkins/workspace/artreyu/target on the host which will be available in the jenkins container at target/artreyu because of mounting its parent location.
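A sketch of how those paths line up, using the host paths from the comment above (the build image is illustrative):

```shell
# Jenkins home is mounted from a host path (taken from the comment above;
# adjust for your setup):
docker run -d -p 8080:8080 \
    -v /home/ernest/data/jenkins:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $(which docker):/usr/bin/docker \
    myjenk

# Inside a job, -v paths given to a sibling container are resolved by the
# *host* daemon, so they must be host paths, not container paths:
sudo docker run --rm \
    -v /home/ernest/data/jenkins/workspace/artreyu:/work \
    busybox ls /work/target
```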

Thanks for the article – confirmed what I’d been trying (and failing) to do. Hopefully you can help me even further! 🙂

I’m necessarily (for underlying system architectural reasons) adding an additional layer of abstraction that is causing me grief with the “add the jenkins user to the docker group” approach. My Jenkins master is running separately and is firing up a Docker container for a Jenkins slave, on which it is then executing a docker build. The slave container doesn’t currently appear to be able to connect to the host’s Docker engine with the Docker client unless it is using sudo – which I understand is because it can’t connect to the Unix socket. The socket is bound on the container (-v /var/run/docker.sock:/var/run/docker.sock) but I’m slightly confused about how access rights work on the bound socket.

I’m using a pipeline in Jenkins to execute the build, and it calls Docker from within the Jenkinsfile rather than executing it manually using sh. This keeps the script neater and means I can take advantage of the groovy features (no pun intended) that Cloudbees have made available with their Docker plugin. The problem is I can’t make it use sudo when executing Docker commands, so I either need to solve THAT problem, or I need to solve the docker-without-sudo problem in my Jenkins slave container. Any ideas? 🙂 Any help would be greatly appreciated!

Regarding access rights for mounted sockets, they are just the same as they are on the host, but you have a uid issue to deal with. So if you give the jenkins user access to the socket on the host, that doesn’t mean the jenkins user in the container has access, as the two may or may not have the same uid.
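A quick way to check whether the uids and gids actually line up (the container name is illustrative; the stat format string is GNU coreutils):

```shell
# On the host: who owns the socket?
stat -c 'uid=%u gid=%g (%U:%G)' /var/run/docker.sock

# Inside the container: what uid/gid does the jenkins user actually have?
docker exec myjenk id jenkins

# Access without sudo only works if the container user's uid (or one of
# its gids) matches the owner/group of the socket shown above.
```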

The build of docker I am using on CentOS comes with a dynamically linked docker executable client. I found it necessary to install docker engine into my images, because simply mapping it in gave linker errors when running docker commands. I still map in the socket, and get the desired behavior of using the host system daemon, just minor image bloat.

Since Docker is now dynamically linked, it has dependencies on various libraries (check with ldd /usr/bin/docker).
You may see libapparmor.so.1 reported as not found, so we need to add this library to the Jenkins image.
Adding RUN apt-get install -y libapparmor-dev to the Dockerfile can help 🙂

Yeah, I had the same problem recently. I ended up taking @Blake’s solution and installing Docker into the image, which is slightly annoying. However, it’s also a lot more futureproof and portable than copying in libs. The main issue is that the Docker client and engine can get out-of-sync.

When I’m accessing the Jenkins-container and try to perform a Docker-command I got this error: “error while loading shared libraries: libsystemd-jornal.so.0: cannot open shared object file: No such file or directory”

You’d have to download and install docker-compose in the Dockerfile, then make sure it’s on your path. I’m not sure about docker-machine, that’s going to cause some problems; you don’t want to be creating a VM inside a container…
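For what it's worth, a sketch of that download step as it might appear in the Dockerfile's RUN instruction (the release version and URL pattern are assumptions; pin a version compatible with your daemon):

```shell
# These commands would go in a RUN instruction in the Dockerfile.
# The release version is illustrative -- pin one that matches your daemon.
curl -fsSL "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" \
    -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```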

I just found this page. The way I run DooD is that:
I created a user ‘jenkins’ on the host with a home folder (not the same folder as I use for the master volume mount).
I configured its authorized_keys with a newly generated key and put that key in the Jenkins SSH credential configuration.
I then configured a new node as an SSH node with the IP of my host server.
I then set the master node as only for matching jobs so that they will prefer running on the node.
I added the jenkins user to the docker group.

Thus I have a running docker to build images, no need for passwordless sudo and the group does not have to be numerically baked into the Dockerfile.

This seems to work. I’m sure someone will come up with a myriad of reasons why this is a bad plan, so my question is what are they?

If it works for you, that’s great. To me it seems a lot of work to enable SSH and a security issue (although so is sharing the Docker socket). Another minor issue is that you need to make sure the UIDs of the Docker users match.

My goal is to have jenkins create images and push them onto an external registry.
While docker build command runs fine, I’m struggling to make docker-compose build command to work.
I keep getting a ” .IOError: [Errno 2] No such file or directory: u’./docker-compose.yml’ ” error.

Ha, well I guess it will work. Unfortunately I think most people will be entirely unhappy with this solution. The jenkins user is now effectively root on the host; it assumes that the UID of jenkins is the same in the container and on the host, and it is not portable between hosts.

At runtime, Docker will dynamically assign the GID passed with -u to the user (i.e. jenkins) running the container process. This way your Jenkins image remains portable and you don’t need to sudo your Docker commands, thus allowing use of docker-workflow plugins and such. Note that I did install the Docker binaries in the Jenkins image but used the host’s docker.sock file, on both Windows and Linux.
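For anyone wanting to try this, a sketch of the approach (using the image name from the post; uid 1000 is the jenkins user in the official image):

```shell
# Find the gid that owns the Docker socket on this host:
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)

# Run Jenkins as uid 1000 with the host's docker gid assigned at runtime --
# nothing host-specific is baked into the image, and no sudo is needed:
docker run -d -p 8080:8080 \
    -u 1000:"$DOCKER_GID" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    myjenk
```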

What do you think? Any drawbacks to this approach? Will reply to any feedback or questions.

Great post and Q&A!
Does anyone have experience running this approach with a more recent version of docker?
I am stuck with trying this approach on CentOS and docker 1.10.3, getting this error for docker load:

Background: we have been using this kind of Jenkins inside Docker approach (similar to the one described in the original post) for quite some time with docker 1.9.1 and CentOS 7.2. Now for some reason we needed to upgrade to docker 1.10.3. The discussion here helped me to figure out that I need to install docker inside the Jenkins image and map only the socket, not the binary. I am using this command to start the Jenkins container:

All docker commands seem to work fine now, also with 1.10.3, except for the docker load (got error above). With 1.9.1 I did not have this problem.
Does anyone have a solution to this, or experience with the approach and even newer versions of docker? Thanks!

Thanks Adrian for this post and the many others you have written regarding Docker security. I completely share your security concerns about the possible options for getting this working.

We are running Jenkins alongside the Docker daemon without packaging it in a container. We have a constraint of using a proprietary privilege manager tool that does not allow passwordless sudo. In this case, enabling mutual TLS auth and connecting to the Docker daemon socket locally over an HTTPS port seems to be the only plausible solution. In addition to this, we are not opening the Docker daemon port to the outside world, via iptables.
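For context, the daemon and client flags for mutual TLS look roughly like this (the certificate paths are illustrative, and the daemon binary name varies with Docker version):

```shell
# Daemon side: require mutual TLS, listening only on localhost:
dockerd --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://127.0.0.1:2376

# Client side, e.g. from the Jenkins user -- no sudo needed:
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://127.0.0.1:2376 info
```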

Is this a reasonable solution? Does running Jenkins in docker with everything else same, change anything in terms of security?

If you’ve thought about it and are using TLS, I think you should be ok. I would make sure this is all only exposed to the internal network or VPN.

At the end of the day, allowing Jenkins to run Docker means that anyone that can run Jenkins stuff pretty much has full access to the host. For this reason, you probably don’t want the host to be running anything sensitive alongside jenkins. Note that this is true regardless of how you expose the Docker socket.

Finally, since I wrote the article, Docker in Docker has come on a lot and might actually be a better solution now. I know play-with-docker.com uses DinD.

docker: Error response from daemon: Mounts denied:
The paths /usr/jenkins and /usr/local/bin/docker
are not shared from OS X and are not known to Docker.
You can configure shared paths from Docker -> Preferences… -> File Sharing.

Nice tutorial but I can’t run docker in the container: docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directory
Any suggestions? Thanks in advance!!

Following your steps, the DooD solution worked well an hour ago for running a Python web app hosted on localhost:5000 with Jenkins + Gogs.
But some errors appeared just now: the browser prompts “The connection was reset”, and nothing I do fixes it.