Welcome back! In the last post, we got Docker Swarm mode installed and up and running. Now, we're delving into creating services and getting them deployed on your swarm.

Creating a Service

With Docker Swarm Mode, a service is a long-running Docker container that can be deployed to any node worker. It’s something that either remote systems or other containers within the swarm can connect to and consume.

For this example, we’re going to deploy a Redis service.

Deploying a Replicated Service

A replicated service is a Docker Swarm service that has a specified number of replicas running. These replicas consist of multiple instances of the specified Docker container. In our case, each replica will be a unique Redis instance.

To create our new service, we’ll use the docker command while specifying the service create options. The following command will create a service named redis that has 2 replicas and publishes port 6379 across the cluster.
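A command along these lines does the job (the `redis:alpine` image tag and the `6379:6379` published:target mapping are assumptions; any Redis image works):

```shell
# Create a replicated service named "redis" with 2 replicas,
# publishing port 6379 on every node in the swarm.
# The redis:alpine tag is an assumption; any Redis image works here.
docker service create \
  --name redis \
  --replicas 2 \
  --publish 6379:6379 \
  redis:alpine
```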

In addition to specifying the service create options, we also used the --name flag to name the service redis and the --replicas flag to request 2 running instances, which the scheduler will typically spread across different nodes. We can validate that both replicas are in fact running by executing the docker command with the service ls options.
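For example (run from a manager node; the REPLICAS column will show how many of the requested tasks are running):

```shell
# List services in the swarm; REPLICAS should read 2/2 once both tasks are up.
docker service ls

# Show which node each individual task (container) landed on.
docker service ps redis
```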

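As a quick sketch of testing the service, assuming redis-cli is installed on the client and swarm-01 follows this article's node naming scheme:

```shell
# Connect to the published port on any swarm node and issue a PING.
# The hostname swarm-01 is an assumption based on the article's naming.
redis-cli -h swarm-01 -p 6379 ping
```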
The connection above was successful, which means our service is up and available.

How Docker Swarm Publishes Services

When we created the redis service, we used the --publish flag with the docker service create command. This flag was used to tell Docker to publish port 6379 as an available port for the redis service.

When Docker publishes a port for a service, it does so by listening on that port across all nodes within the Swarm Cluster. When traffic arrives on that port, it is routed to a container running for that service. While this is straightforward when every node is running one of the service's containers, it gets interesting when we have more nodes than we do replicas.

To see how this works, let’s add a third node worker to the Swarm Cluster.

Adding a Third Node Worker into the Mix

To add another node worker, we can simply repeat the installation and setup steps in the first part of this article. Since we already covered those steps, we’ll skip ahead to the point where we have a three-node Swarm Cluster. We can once again check the status of this cluster by running the docker command.
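After joining the new node (using the worker join token printed by `docker swarm join-token worker` on a manager), the cluster status can be checked like so:

```shell
# Run from a manager node: lists every node, its status, and its role.
docker node ls
```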

With replicated services, Docker Swarm’s goal is to ensure that there is a task (container) running for every replica specified. When we created the redis service, we specified that there should be 2 replicas. This means that even though we have a third node, Docker has no reason to start a new task on that node.

At this point, we have an interesting situation: We have a service that’s running on 2 of the 3 Swarm nodes. In a non-Swarm world, that would mean the redis service would be unavailable when connecting to our third Swarm node. With Swarm Mode however, that is not the case.

Connecting to a Service on a Non-Task-Running Worker

Earlier when I described how Docker publishes a service port, I mentioned that it does so by publishing that port across all nodes within the Swarm. What’s interesting about this is what happens when we connect to a node worker that isn’t running any containers (tasks) associated with our service.

Let’s take a look at what happens when we connect to swarm-03 over the redis published port.
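A sketch of that test, again assuming redis-cli is available on the client:

```shell
# swarm-03 is running no redis task, yet the published port still answers,
# because the swarm routes the traffic to a node that is running one.
redis-cli -h swarm-03 -p 6379 ping
```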

What’s interesting about this is that our connection was successful. It was successful despite the fact that swarm-03 is not running any redis containers. This works because internally Docker is rerouting our redis service traffic to a node worker running a redis container.

Docker calls this ingress load balancing, part of the swarm's routing mesh. Every node worker listens for connections on each published service port. When an external system connects, the receiving node accepts the traffic and internally load balances it across the service's running tasks, forwarding it over the swarm's overlay network when the task lives on another node.

So even if we scaled out our Swarm cluster to 100 node workers, end users of our redis service can simply connect to any node worker. They will then be redirected to one of the two Docker hosts running the service tasks (containers).

All of this rerouting and load balancing is completely transparent to the end user. It all happens within the Swarm Cluster.

Making Our Service Global

At this point, we have the redis service set up to run with 2 replicas, meaning it’s running containers on 2 of the 3 nodes.

If we wanted our redis service to consist of an instance on every node worker, we could do that easily by changing the service's desired replica count from 2 to 3. However, every time we then added or removed a node worker, we would need to adjust the replica count again.
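That adjustment is a one-liner; either of these equivalent commands works:

```shell
# Scale the replicated service from 2 tasks to 3.
docker service scale redis=3

# Equivalent form using service update.
docker service update --replicas 3 redis
```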

We could alternatively do this automatically by making our service a Global Service. A Global Service in Docker Swarm Mode is used to create a service that has a task running on every node worker automatically. This is useful for common services such as Redis that may be leveraged internally by other services.

To show this in action, let’s go ahead and recreate our redis service as a Global Service.
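A service's mode cannot be changed in place, so the sketch below removes the service and recreates it with --mode global (the image tag is again an assumption):

```shell
# Remove the existing replicated service.
docker service rm redis

# Recreate it as a global service: one task per node, no --replicas flag.
docker service create \
  --name redis \
  --mode global \
  --publish 6379:6379 \
  redis:alpine
```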

We can see that when the service was created as a Global Service, a task was then started on every node worker within our Swarm Cluster.
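We can verify this from a manager node:

```shell
# The MODE column should now read "global".
docker service ls

# Each node in the cluster should appear once in the task list.
docker service ps redis
```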

Summary

In this article, we not only installed Docker Engine and set up a Swarm Cluster; we also deployed a replicated service and then created a Global Service.

In a recent article, I not only installed Kubernetes, I also created a Kubernetes service. Comparing the two, I personally found Swarm Mode services easier to set up and create. For someone who simply wants the “services” features of Kubernetes and doesn’t need some of its other capabilities, Docker Swarm Mode may be an easier alternative.