Docker SSH Tunnel

2017, Feb 28

Sometimes when you are developing locally you might need access to a remote system running a Docker container that has closed all of its ports for security reasons (something I strongly recommend, as it really reduces the surface area for security breaches and attacks). So how can we get access to the production data from our own local Docker stack?

Often the configuration of an app consisting of several dockerized services is described in a docker-compose.yml file, and internal addressing uses the service names on the Docker virtual network. Something like this:
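As a sketch of what such a file might look like (the service names and image tags here are illustrative assumptions; only the leandot-net-local network name comes from this setup):

```yaml
# docker-compose.yml -- illustrative sketch; services and images are assumptions
version: '2'

services:
  app:
    image: leandot/app:latest    # hypothetical application image
    networks:
      - leandot-net-local
  redis:
    image: redis:3.2
    networks:
      - leandot-net-local

networks:
  leandot-net-local:
    driver: bridge
```

The app reaches Redis simply as `redis:6379`, because Docker resolves service names on the shared network.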

In order to be able to talk to a production service running on a remote Docker engine, we need to somehow treat it as if it were running in our leandot-net-local network. A good way to do that is to run a container whose only task is to tunnel all traffic sent to it over to the production system. Let's create such a container, which will tunnel to a production Redis instance.

I assume that you can SSH to the remote system with your SSH key passwordlessly; if not, please set that up first.
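A quick way to verify this (user and host names are placeholders):

```shell
# BatchMode makes ssh fail instead of prompting, so this prints "ok"
# only if key-based, passwordless login actually works
ssh -o BatchMode=yes user@production.example.com echo ok
```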

Make sure that we can tunnel traffic through the instance running the remote Docker engine: check that /etc/ssh/sshd_config does not contain a line like AllowTcpForwarding no.
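One way to check, run on the production host:

```shell
# No output (the default) or "AllowTcpForwarding yes" means tunnelling
# is allowed; "AllowTcpForwarding no" means it is blocked
grep -i AllowTcpForwarding /etc/ssh/sshd_config
```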

Find out the internal IP of the Redis container on production. Run the following on the production host:

docker inspect YOUR_REDIS_INSTANCE_ID | grep IPAddress

and you will get the address, which we will refer to as CONTAINER_IP later on.
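If you prefer a single clean value without grepping, docker inspect also accepts a Go template (same container ID placeholder as above):

```shell
# Prints only the container's IP address on its network(s)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' YOUR_REDIS_INSTANCE_ID
```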

Now comes the tricky part. If you simply mount your private SSH key into a container, it won't have the correct permissions and SSH will complain. So we need to mount the file and change its permissions inside the container, which is where bindfs comes in. The clever folks at cardcorp have already solved this issue, so let's borrow what we need and quickly explain what is going on. Here is the Dockerfile we are going to use:
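A minimal sketch of the idea (the base image, mount paths, and the SSH_USER, SSH_HOST, and CONTAINER_IP environment variables are assumptions, not the exact cardcorp file; note that bindfs relies on FUSE, so the container must run with SYS_ADMIN capability and access to /dev/fuse):

```dockerfile
# Sketch of an SSH-tunnel container using bindfs; paths and versions are assumptions
FROM ubuntu:16.04

RUN apt-get update && \
    apt-get install -y --no-install-recommends openssh-client bindfs && \
    rm -rf /var/lib/apt/lists/*

# The private key is volume-mounted read-only under /ssh; bindfs re-exposes
# it under /ssh_perms with the 0600 permissions SSH insists on, then ssh
# forwards local port 6379 to the Redis container on the production host.
CMD mkdir -p /ssh_perms && \
    bindfs --perms=0600 /ssh /ssh_perms && \
    ssh -i /ssh_perms/id_rsa -o StrictHostKeyChecking=no \
        -N -L "0.0.0.0:6379:${CONTAINER_IP}:6379" "${SSH_USER}@${SSH_HOST}"
```

The -N flag tells SSH not to run a remote command (we only want the tunnel), and binding the forward to 0.0.0.0 lets other containers on the local network connect to it.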

If all is configured correctly and you start the container locally with

docker-compose up

you should be able to establish a tunnel to the production system.
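For completeness, a sketch of how the tunnel service might be declared in docker-compose.yml (the directory name, key path, and environment values are assumptions):

```yaml
# Sketch of the tunnel service definition; names, paths, and values are assumptions
services:
  redis_ssh_tunnel:
    build: ./redis-ssh-tunnel       # directory containing the tunnel Dockerfile
    cap_add:
      - SYS_ADMIN                   # bindfs needs FUSE
    devices:
      - /dev/fuse
    volumes:
      - ~/.ssh/id_rsa:/ssh/id_rsa:ro
    environment:
      - SSH_USER=deploy
      - SSH_HOST=production.example.com
      - CONTAINER_IP=172.17.0.2     # the value found with docker inspect
    networks:
      - leandot-net-local
```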

You should now be able to connect to the production Redis container, even though it is completely shielded from the outside world, and your connection will be securely encrypted thanks to SSH. Not bad! If you have an application that normally connects to your local Redis, you can change the hostname to redis_ssh_tunnel and you will be reading data from production. To avoid unintentionally writing to a production system, it might make sense to split the connections that read from those that write, and only swap the hostname for the reading one.
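A quick sanity check from any container on the leandot-net-local network (assuming redis-cli is available there):

```shell
# Goes over the SSH tunnel to the production Redis; "PONG" means it works
redis-cli -h redis_ssh_tunnel -p 6379 ping
```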