This article offers a step-by-step guide on setting up a load-balanced service deployed on Docker containers using OpenStack VMs. The installation consists of an Nginx load balancer and multiple upstream nodes located in two deployments.

The issue

Let’s imagine that we plan to deploy an application that is expected to be heavily used. The assumption is that a single server can’t handle all incoming requests, so we need multiple instances. Moreover, the application runs complex calculations that fully utilize a server for a long time, so a single server instance can’t meet the expected performance requirements. Or we may simply want to deploy the application across multiple instances to ensure that if one instance fails, another keeps operating.

Docker allows us to easily and efficiently run multiple instances of the same service. Docker containers are designed so that they can be brought up on a VM very quickly, regardless of the underlying layers.

Such an installation, however, runs the containers as separate objects. When building this kind of infrastructure, it is desirable to keep all the instances available under a single URL. This requirement can be satisfied by adding a load balancer node.

Our goal

Our goal is to set up an installation that has an Nginx reverse proxy server at the front and a set of upstream servers handling the requests. The Nginx server is the one directly communicating with clients. Clients don’t receive any information about the particular upstream server handling their requests. The responses appear to come directly from the reverse proxy server.

In addition to this functionality, Nginx performs health checks to ensure that the nodes behind the load balancer are still operating. If one of the servers stops responding, Nginx stops forwarding requests to the failed node.

How to build the infrastructure

First, we need to create Docker hosts in which we can run the containers. If you are not familiar with Docker, we recommend reading more about launching Docker hosts and containers in our previous article Docker containers on OpenStack VMs first. In this entry, we launch two OpenStack VMs running Ubuntu 14.04 on different deployments. The easiest way to launch the VMs is to use the Dashboard.

The VMs should be launched with a public IP address so that they are directly addressable over the internet. TCP port 2376 must be open so that Docker can communicate with the hosts. Additionally, TCP port 80 (HTTP) needs to be open to access the load balancer, as well as ports 8080 and 8081 so that the reverse proxy server can reach the upstream servers, which will be accessible on those ports.
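The ports can be opened through the Dashboard, or with the OpenStack CLI; a sketch, assuming a security group named docker-lb (the group name is hypothetical):

```shell
# Open the required TCP ports in the (hypothetical) "docker-lb" security group:
# 2376 for Docker, 80 for the load balancer, 8080/8081 for the upstream servers.
for port in 2376 80 8080 8081; do
  openstack security group rule create --protocol tcp --dst-port "$port" docker-lb
done
```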

Once the VMs are launched and operating, we make them Docker hosts with docker-machine using the generic driver. We can do so by executing this command from the local environment for both VMs:
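The commands themselves are not reproduced here; a sketch using docker-machine’s generic driver, assuming the VMs’ public IPs are 1.2.3.4 and 5.6.7.8 (as used later in this article), the SSH user is ubuntu, and the hosts are named lb1 and node2 (lb1 appears later in the article; node2 is a name we chose):

```shell
# Register the first VM (the future load balancer) as Docker host "lb1":
docker-machine create --driver generic \
  --generic-ip-address 1.2.3.4 \
  --generic-ssh-user ubuntu \
  --generic-ssh-key ~/.ssh/id_rsa \
  lb1

# Register the second VM as Docker host "node2":
docker-machine create --driver generic \
  --generic-ip-address 5.6.7.8 \
  --generic-ssh-user ubuntu \
  --generic-ssh-key ~/.ssh/id_rsa \
  node2
```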

Now, we can run the containers on the hosts. We launch the upstream nodes and the reverse proxy. In this article, tutum/hello-world was chosen as the image for the upstream nodes because it lets us differentiate the particular containers from one another. We launch two containers from the hello-world image on each of the two VMs. Moreover, we launch the load balancer on one of the hosts, using the official nginx image, in which Nginx is already set up. First, on instance lb1:
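The launch commands are not reproduced here; a sketch, assuming the upstream containers are named con1–con4 and the second host is node2 (con1 and nginx1 appear later in the article; the remaining names are our own):

```shell
# On lb1: two upstream containers (published on ports 8080/8081)
# and the Nginx load balancer on port 80.
eval $(docker-machine env lb1)
docker run -d --name con1 -p 8080:80 tutum/hello-world
docker run -d --name con2 -p 8081:80 tutum/hello-world
docker run -d --name nginx1 -p 80:80 nginx

# On the second host: two more upstream containers.
eval $(docker-machine env node2)
docker run -d --name con3 -p 8080:80 tutum/hello-world
docker run -d --name con4 -p 8081:80 tutum/hello-world
```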

How to configure the reverse proxy server

In the current state, we have an Nginx welcome page accessible at 1.2.3.4 and four instances of the hello-world web application, each with slightly different content, at 1.2.3.4:8080, 1.2.3.4:8081, 5.6.7.8:8080, and 5.6.7.8:8081.

Now, let’s configure the Nginx node to become a load balancer and a reverse proxy server. The configuration file we want to create in the container is /etc/nginx/conf.d/default.conf. We can execute a command in a container using the docker exec command; a convenient way is to establish a new shell session in the container. The nginx image provides Bash at /bin/bash:

# eval $(docker-machine env lb1)
# docker exec -it nginx1 /bin/bash

Executing these commands opens a Bash session in which we can insert the desired content into the configuration file.
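The file contents are not reproduced here; a minimal /etc/nginx/conf.d/default.conf consistent with the description below could look like this (the addresses are the public IPs of the two hosts):

```nginx
upstream servers {
    server 1.2.3.4:8080;
    server 1.2.3.4:8081;
    server 5.6.7.8:8080;
    server 5.6.7.8:8081;
}

server {
    listen 80;

    location / {
        proxy_pass http://servers;
    }
}
```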

The upstream section specifies the four upstream servers that we created before; they will be accessed at the given addresses. Because no other method is specified, the default round-robin load balancing is used.

The server section makes the node act as a reverse proxy by pointing the proxy_pass directive at the group of servers defined above. The server listens on port 80.

Be aware that the listen port does not have to correspond to the port at which the server is accessible from outside (1.2.3.4:80). If a port mapping other than -p 80:80 had been used when starting the container, this directive would change accordingly. For example, if -p 8080:8081 had been used, we would access the server at 1.2.3.4:8080 and would have to use listen 8081; in the configuration file.

The configuration file is in place now, so we can terminate the Bash session:

# exit

Back in our local environment, we restart the container so that the changes to the configuration file are loaded and the node begins to operate as a load balancer.

# docker restart nginx1

Review the infrastructure

The system is running now, so we can check its functionality. We can type http://1.2.3.4 into our browser and see the hello-world app content. When we reload the page, the displayed hostname changes, meaning that another upstream server has responded to our request.
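Instead of a browser, the check can also be scripted; a sketch with curl, assuming the hello-world page includes the container hostname in its output:

```shell
# Request the page several times; the reported hostname should rotate.
for i in 1 2 3 4; do
  curl -s http://1.2.3.4 | grep -i hostname
done
```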

Simulate a failover

Let’s test the health checking feature. First, we stop one of the containers:

# docker stop con1

Then we access http://1.2.3.4 several times and check the hostnames being displayed – only three different hostnames alternate now. This means that the stopped container no longer receives requests.

Now we start the container again. After a short delay, all four hosts should be responding once more:

# docker start con1

Conclusion

Nginx is an efficient way to perform load balancing in order to provide failover, increase availability, extend the fleet of application servers, or unify the access point to the installation. Docker containers allow us to quickly spawn multiple instances of the same type on various nodes. Combined, they form an easy and powerful mechanism for solving such challenges.

This post first appeared on the Cloud&Heat blog. Superuser is always interested in community content; email: [email protected]