Remote Docker deployment done securely

Did you follow my post Running WordPress using Docker, or have you installed Docker containers directly on a Docker host?
Do you find it painful to copy the setup over and log in to the Docker host every time you change something? Of course you do.
Today I'll show you secure remote Docker deployment so you can avoid this pain in the future.

Since TLS connections can be made via IP address as well as DNS name, the IP addresses need to be specified when creating the certificate. For example, to allow connections using your FQDN ($HOST), 10.10.10.20 and 127.0.0.1, specify the following:
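A minimal sketch of that step, following the Docker TLS setup procedure: the FQDN and IP addresses go into the certificate's subjectAltName via the OpenSSL extensions file. Here 10.10.10.20 stands in for your daemon's LAN IP, and `extfile.cnf` is the extensions file later passed to `openssl x509 -extfile`.

```shell
# Allow TLS connections by DNS name ($HOST) and by IP address:
echo subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1 >> extfile.cnf
```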

```shell
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
```

But wait, what does the top of the file say:
# THIS FILE DOES NOT APPLY TO SYSTEMD…

Wtf, you might think: isn't Debian 9 using systemd? It is, and there are tons of potential solutions for this floating around. Most of them don't work.

What we are going to do is this: you will extend the systemd configuration in a clean and supported way to use the $DOCKER_OPTS environment variable which is set in /etc/default/docker. Thus we do not change the supplied docker.service configuration file but add a configuration file to the systemd drop-in directory for Docker, which overrides the standard configuration options at runtime. (The drop-in directory /etc/systemd/system/docker.service.d might not exist yet, so you may have to create it.)

Create the file /etc/systemd/system/docker.service.d/docker.conf with the following contents:
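The original file contents were lost here; a minimal sketch of such a drop-in, assuming dockerd lives at /usr/bin/dockerd and $DOCKER_OPTS is defined in /etc/default/docker, looks like this (the empty ExecStart= line is required to clear the ExecStart inherited from the stock unit before redefining it):

```ini
# /etc/systemd/system/docker.service.d/docker.conf
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_OPTS
```

After creating the file, run `systemctl daemon-reload` and `systemctl restart docker` so systemd picks up the drop-in.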

Warning: As shown in the example above, you don't need to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!

Secure Docker deployment by default

If you want secure Docker client connections by default, you can move the files to the .docker directory in your home directory and set the DOCKER_HOST and DOCKER_TLS_VERIFY variables as well (instead of passing -H=tcp://$HOST:2376 and --tlsverify on every call). Docker will then use the certificates and keys in the .docker directory automatically.

```shell
$ mkdir -pv ~/.docker
$ cp -v {ca,cert,key}.pem ~/.docker
$ export DOCKER_HOST=tcp://$HOST:2376 DOCKER_TLS_VERIFY=1
```

Integrated workflow for secure remote Docker deployment

If you have a development workflow like DEV-TEST-PROD, you cannot really use the above setup, because you will have different certificates and keys for TEST and PROD. You would have to use the long command line version every time. Not cool.

I have a solution for you: set the environment variables per environment, e.g. prod, test, …, like this:
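One way to sketch this is a small shell function that switches the Docker client between environments. The host names (prod.example.com, test.example.com) and the per-environment certificate directories under ~/.docker are assumptions; adjust them to your own hosts and key locations.

```shell
# docker-env: point the Docker client at a named environment.
# Host names and cert paths below are placeholders, not real infrastructure.
docker-env() {
  local env="$1" host
  case "$env" in
    prod) host="prod.example.com" ;;
    test) host="test.example.com" ;;
    *) echo "unknown environment: $env" >&2; return 1 ;;
  esac
  export DOCKER_HOST="tcp://${host}:2376"
  export DOCKER_TLS_VERIFY=1
  # One key/cert set per environment, e.g. ~/.docker/prod/{ca,cert,key}.pem
  export DOCKER_CERT_PATH="$HOME/.docker/${env}"
}
```

With this in your ~/.bashrc, `docker-env test` followed by a plain `docker ps` talks to the TEST daemon over TLS, and `docker-env prod` switches to PROD, no long command lines needed.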