Rolling upgrades?

In the previous post we deployed our Swarm service on the public cloud, Amazon EC2.

In this post, we will play a bit with rolling upgrades, and use some visualization tools to see what's going on in our Docker cluster.

Let's go!

Docker Visualizer

Before we install the Visualizer, let me explain the configuration of my "cluster":
It is currently composed of 4 machines with Docker installed, called stratus-clay, swarm1, swarm2 and swarm3 for this exercise. I'll be using stratus-clay as the manager most of the time, unless stated otherwise, and swarm1, swarm2 and swarm3 as the workers. (But remember, a manager can be a worker too ;))
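The Visualizer can be started as a service on the manager node. A sketch, assuming the `dockersamples/visualizer` image and port 8080 (adjust the port to taste):

```shell
# Run the Visualizer on the manager, mounting the Docker socket
# so it can query the Swarm state
docker service create \
  --name=viz \
  --publish=8080:8080 \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
```

Then point your browser at http://stratus-clay:8080.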

If you are quick enough, you'll see that Swarm is starting the service (pulling the image and deploying it):

and then see it move into a running state:
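You can also follow the task states from the CLI, without the Visualizer; `docker service ps` lists each task and its current state (Preparing, Running, ...). Assuming the service is called greeters, as in the update command later on:

```shell
# List the tasks of the greeters service and their current state
docker service ps greeters
```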

curl http://stratus-clay:9000

Hello, Docker world from 6c8d8e744896.

Let's do a first benchmark, and see how performant our service is:

ab -n 100000 -c 30 http://stratus-clay:9000/ | grep "#/sec"

Requests per second: 2613.28 [#/sec] (mean)

Scaling our Greeters

Let's scale our service to 5 instances:
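The scaling itself is a one-liner (using the greeters service name from the update command below):

```shell
# Ask Swarm to run 5 replicas of the service
docker service scale greeters=5
```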

And perform our benchmark again:

ab -n 100000 -c 30 http://stratus-clay:9000/ | grep "#/sec"

Requests per second: 5488.99 [#/sec] (mean)

Not bad, for just performing some docker commands :)

Upgrading our Greeters

Ok, let's imagine that we want to deploy a newer (and of course better) image of our service, but we don't want our service to go offline. We would like to do a rolling upgrade, meaning that we upgrade our cluster with no downtime for end users.

That also implies that, while our service is getting upgraded on some nodes, other nodes are still handling user traffic. Our cluster will potentially run 2 versions of our service at the same time!

Enough talk. Let's see that in action now.

To test our service, let's simulate a user request that wants a result in under 3 seconds, in an endless loop:
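A minimal version of such a loop, using curl's `--max-time` flag for the 3-second budget:

```shell
# Hit the service endlessly; each request must complete within 3 seconds
while true; do
  curl --max-time 3 http://stratus-clay:9000
done
```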

Hello, Docker world from 9b2c079a4630.
Hello, Docker world from adec837d8169.
Hello, Docker world from ada0e1f1550f.
Hello, Docker world from 62dc563e75fa.
Hello, Docker world from 6c8d8e744896.
Hello, Docker world from 9b2c079a4630.
Hello, Docker world from adec837d8169.
Hello, Docker world from ada0e1f1550f.
Hello, Docker world from 62dc563e75fa.
Hello, Docker world from 6c8d8e744896.
....

Here we see again the load balancing on different containers.

Let's upgrade the service with a new image that we built. The new version will simply reply with:

Hello New Docker world from <hostname>.
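Building and pushing that new version might look something like this (assuming the Dockerfile sits in the current directory and the same image repository as before):

```shell
# Build the new version of the service image and tag it 0.0.2
docker build -t jmkhael/myservice:0.0.2 .

# Push it to the registry so every Swarm node can pull it
docker push jmkhael/myservice:0.0.2
```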

Keep the while loop going, and execute in another shell:

docker service update --image jmkhael/myservice:0.0.2 greeters

The upgrade procedure starts by isolating one container at a time, upgrading it, then moving on to the next. We could also configure the parallelism (but we won't :p)

With the default update policy, the scheduler applies rolling updates as follows:

Stop the first task (i.e. container).

Schedule an update for the stopped task.

Start the container for the updated task.

If the update to a task returns RUNNING, wait for the specified delay period, then stop the next task.
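That policy can be tuned. For instance, a sketch of a more aggressive update, using the standard `--update-parallelism` and `--update-delay` flags of `docker service update`:

```shell
# Upgrade 2 tasks at a time, waiting 10 seconds between batches
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --image jmkhael/myservice:0.0.2 \
  greeters
```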