When it comes to (simple) web applications, Docker is most of the time a perfect fit. However, as you begin to migrate your applications into Docker containers, you might ask yourself how to forward all incoming requests to the different containers. A Docker reverse proxy can help!

Virtual Hosts vs. Containers

In a classic setup without Docker you might have a web server like Apache or nginx. The web server is in charge of multiple websites and web applications, all separated by virtual hosts. The virtual hosts are based on the hostname (i.e. ServerName in Apache or server_name in nginx) and / or listen on different IP addresses.
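In nginx, such a name-based setup might look like the following sketch (the hostnames and document roots are made up for illustration):

```nginx
# Two virtual hosts on the same IP and port, separated only by hostname
server {
    listen 80;
    server_name blog.example.com;
    root /var/www/blog;
}

server {
    listen 80;
    server_name shop.example.com;
    root /var/www/shop;
}
```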

All requests will be handled by this single web server, which evaluates the Host: header and forwards the request to the desired virtual host.

When we look at Docker containers we realise that each virtual host is now a separate container. Every web application has its own container with its own web server instance. Instead of a single web server, we eventually have multiple web servers. Of course you could simply use a different port for each container / web server, but this isn’t very handy.

A reverse proxy for Docker containers

What you’re looking for

Instead of forwarding all the requests directly to the containers, you should use a reverse proxy. The reverse proxy listens for incoming HTTP(S) requests and forwards them to your containers. However, with a default Docker setup the IP addresses of your containers can change at any time. So there are two options:

Give your containers fixed IP addresses

Use a more dynamic proxy configuration

Even though fixed IP addresses have some benefits, they go against the dynamic nature of Docker and you’d lose a lot of its other advantages. So let’s focus on the dynamic proxy configuration.

The thing you’re looking for is:

A reverse proxy process.

A process which “knows” your web application containers.

A process which updates your reverse proxy with the correct configuration.

Let’s focus on the simple part first, the reverse proxy.

nginx

There are a lot of different options out there for reverse proxying (e.g. Squid, Apache, nginx). I’m a big fan of nginx, because it’s easy to configure and it’s fast! So I always use the official nginx Docker image.

Of course it’s only a generic nginx image, so we need to provide it with an nginx configuration. Because we don’t want to overwrite the default nginx config, we mount the nginx conf.d directory into the Docker container. We also use HTTPS (SSL), so we need some certificates as well:

nginx volumes

```
/var/lib/docker/data/proxy/conf.d:/etc/nginx/conf.d:ro
/var/lib/docker/data/proxy/certs:/etc/nginx/certs:ro
```

Unfortunately, the conf.d directory is empty right now, but we’ll provide a configuration in the next chapter. Please also make sure that the HTTP port 80 and the HTTPS port 443 are properly forwarded to the nginx container.
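If you want to try the proxy container on its own before wiring everything up with docker-compose, starting it by hand might look like this (a sketch; the container name proxy is reused later so docker-gen can send it a SIGHUP):

```sh
# Start the official nginx image with the config and cert volumes
# mounted read-only, and ports 80/443 published on the host.
docker run -d --name proxy \
  -p 80:80 -p 443:443 \
  -v /var/lib/docker/data/proxy/conf.d:/etc/nginx/conf.d:ro \
  -v /var/lib/docker/data/proxy/certs:/etc/nginx/certs:ro \
  nginx
```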

docker-gen

A guy called jwilder built a really nice Docker image which does some magic: docker-gen “knows” your containers and renders a configuration file based on a template. However, docker-gen needs read access to your Docker socket, because it monitors containers being started and stopped.

So we need to mount 3 different volumes into this Docker container:

docker-gen volumes

```
/var/run/docker.sock:/tmp/docker.sock:ro
/var/lib/docker/data/proxygen:/templates
/var/lib/docker/data/proxy/conf.d:/conf
```

Before docker-gen can do anything you need to feed it with a Go template. Here’s my nginx template:

proxy.tmpl (excerpt)

```
{{ define "upstream" }}
	{{ if .Address }}
		{{/* If we got the containers from swarm and this container's port is published to host, use host IP:PORT */}}
		{{ if and .Container.Node.ID .Address.HostPort }}
			# {{ .Container.Node.Name }}/{{ .Container.Name }}
			server {{ .Container.Node.Address.IP }}:{{ .Address.HostPort }};
		{{/* If there is no swarm node or the port is not published on host, use container's IP:PORT */}}
```

This template will create a configuration file for an nginx reverse proxy. The nice things about docker-gen and this template are:

docker-gen “knows” your containers

docker-gen will create an upstream / server entry for each container with a VIRTUAL_HOST environment variable

docker-gen will re-create the config each time you stop / start a container
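To make this more concrete: for a single container started with VIRTUAL_HOST=blog.example.com, the rendered configuration might contain something along these lines (the hostname, container IP and port are made up, and the exact output depends on your template):

```nginx
upstream blog.example.com {
    # hypothetical container IP:PORT on the Docker bridge network
    server 172.17.0.5:8080;
}

server {
    listen 80;
    server_name blog.example.com;
    location / {
        proxy_pass http://blog.example.com;
        proxy_set_header Host $http_host;
    }
}
```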

The only thing you need to do is provide docker-gen with the template and the path for the rendered config. You can do that by specifying these command arguments:

docker-gen command

```
-watch -notify-sighup=proxy /templates/proxy.tmpl /conf/proxy.conf
```

Run the container and docker-gen will now create /conf/proxy.conf based on the /templates/proxy.tmpl template. Whenever the rendered configuration changes, docker-gen will also send a SIGHUP to the proxy container.

Please read the docs on Docker Hub for more information about docker-gen. There are already other nginx configuration templates available. However, I needed to modify mine a bit because I use WebSockets in one of the containers.

There’s also an nginx-proxy image available on Docker Hub, which combines docker-gen and nginx in one container. However, from a security point of view, I don’t recommend mounting the critical Docker socket directly into a publicly available Docker container 😉
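Putting the two containers together, a minimal docker-compose file might look like the following sketch. It is based on the volumes and command arguments above; the image tags and service names are assumptions, and the container name proxy must match the -notify-sighup argument:

```yaml
proxy:
  image: nginx
  container_name: proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/lib/docker/data/proxy/conf.d:/etc/nginx/conf.d:ro
    - /var/lib/docker/data/proxy/certs:/etc/nginx/certs:ro

proxygen:
  image: jwilder/docker-gen
  command: -watch -notify-sighup=proxy /templates/proxy.tmpl /conf/proxy.conf
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /var/lib/docker/data/proxygen:/templates
    - /var/lib/docker/data/proxy/conf.d:/conf
```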

Customise the paths of the volumes for your own needs, add the certs to the certs/ directory and make sure the proxy.tmpl exists in the templates/ directory. Then run the containers by executing:

docker-compose

```sh
docker-compose [-f COMPOSE-FILE.yml] up -d
```

Connect to your host via HTTP and HTTPS and check if you get a response. You should get an HTTP 503 response, which is fine: nginx is running, it just doesn’t have any upstream servers configured yet.

Adding upstream servers

Now it gets magic 🙂

When you start a new container you can easily add the following environment variables:

VIRTUAL_HOST sets the virtual hostname of your service

VIRTUAL_PORT is optional and sets the HTTP(S) port of your service

VIRTUAL_PROTO is optional and sets the protocol of your service (http or https)

Whenever you start a container with the VIRTUAL_HOST environment variable, the proxy container will forward all requests belonging to this hostname to your container. By default, http and the exposed port of your container will be used; however, you can override both by setting the additional environment variables.
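For example, registering a hypothetical web application with the proxy could look like this (the hostname, port and image name are made up):

```sh
# The proxy will forward requests for blog.example.com
# to this container on port 8080 via plain HTTP.
docker run -d \
  -e VIRTUAL_HOST=blog.example.com \
  -e VIRTUAL_PORT=8080 \
  -e VIRTUAL_PROTO=http \
  my-web-app
```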

A nice test environment

If you use the configuration above you can easily set up a web test environment based on Docker for your own needs. You only have to make sure that you have a subdomain which points at your Docker host.

Let’s say your Docker host is called docker.confirm.ch and you want all your containers in the testing.confirm.ch subdomain:

DNS example config

```
docker.confirm.ch.      IN A      1.2.3.4
*.testing.confirm.ch.   IN CNAME  docker.confirm.ch.
```

Now you can start multiple Docker containers, all with a VIRTUAL_HOST in the subdomain *.testing.confirm.ch. Via DNS you make sure that all requests land on docker.confirm.ch, and nginx then forwards the requests to your containers.

To make everything more secure you can completely disable HTTP and create a wildcard SSL certificate for your subdomain.
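For a test environment, a self-signed wildcard certificate can be generated with openssl, for example like this (a sketch; for anything public you would use a CA-issued certificate, and the output files belong in the certs/ directory mounted into the proxy container):

```shell
# Create a self-signed wildcard certificate and key, valid for one year.
# The CN covers every host under testing.confirm.ch.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=*.testing.confirm.ch" \
  -keyout wildcard.testing.confirm.ch.key \
  -out wildcard.testing.confirm.ch.crt
```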