Introduction

RabbitMQ provides, among other features, clustering capabilities. With clustering, a group of properly configured hosts behaves as a single broker instance.

All the nodes of a RabbitMQ cluster share the definitions of vhosts, users, and exchanges, but not queues. By default a queue physically resides on the node where it was created; as of version 3.6.1, however, queue node ownership can be configured using Queue Master Location policies. Queues are globally defined and reachable by establishing a connection to any node of the cluster.

Modern architectures often involve container-based ways of scaling, such as Docker. In this post we will see how to create a dynamically scaling RabbitMQ cluster using CoreOS and Docker.

We will take you on a step-by-step journey from zero to the cluster.

Get ready

We are going to use several technologies, although we will not go into the details of all of them. For instance, deep knowledge of CoreOS or Docker is not required to follow this test.

In order to scale up the node above, we should run another container with the --link parameter and execute rabbitmqctl join_cluster rabbit@<first_node>. In order to scale down, we should stop the second container and execute rabbitmqctl forget_cluster_node rabbit@<second_node>.
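The manual procedure can be sketched as follows. Note that the container names (rabbit1, rabbit2), node names, and the cookie value are illustrative assumptions, not values prescribed by the post:

```shell
# Start a second container linked to the first one (names are illustrative)
docker run -d --name rabbit2 --link rabbit1:rabbit1 \
  -e RABBITMQ_ERLANG_COOKIE='ilovebeam' rabbitmq:3

# Join the new node to the existing cluster
docker exec rabbit2 rabbitmqctl stop_app
docker exec rabbit2 rabbitmqctl join_cluster rabbit@rabbit1
docker exec rabbit2 rabbitmqctl start_app

# Scale down: stop the container and remove the node from the cluster
docker stop rabbit2
docker exec rabbit1 rabbitmqctl forget_cluster_node rabbit@rabbit2
```

As the number of nodes grows, repeating these steps by hand quickly becomes error-prone, which motivates the automation below.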

This is one of the areas where further enhancement through automation would be helpful.

We need Docker orchestration to configure and manage the Docker cluster. Among the available orchestration tools, we have chosen Docker Swarm.
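Before creating the service, the swarm and the overlay network it uses have to exist. A minimal sketch, assuming the COREOS_PRIVATE_IPV4 environment variable that CoreOS exposes (also used later in the service parameters):

```shell
# Initialize the swarm on the first CoreOS host
docker swarm init --advertise-addr "${COREOS_PRIVATE_IPV4}"

# Create the overlay network that the RabbitMQ service will attach to
docker network create --driver overlay rabbitmq-network
```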

The same applies to the other nodes: each one runs more or less the same number of containers.

Let’s now look at the docker service parameters in detail:

docker service create
    Create a Docker service.

--name rabbitmq-docker-service
    Set the service name; you can check the list of services using docker service ls.

-p 15672:15672 -p 5672:5672
    Map the standard RabbitMQ ports: 5672 is the AMQP port and 15672 is the management UI port.

--network rabbitmq-network
    Choose the Docker network.

-e RABBITMQ_ERLANG_COOKIE='ilovebeam'
    Set the same Erlang cookie value on all the containers, which RabbitMQ needs in order to create a cluster. With different cookie values it is not possible to create a cluster.

Next are the auto-cluster parameters:

-e AUTOCLUSTER_TYPE=etcd
    Set the service discovery backend to etcd.

-e ETCD_HOST=${COREOS_PRIVATE_IPV4}
    The containers need to know the etcd2 IP. After starting the service you can query the database using the etcdctl command line, e.g. etcdctl ls /rabbitmq --recursive, or using the HTTP API, e.g. curl -L http://127.0.0.1:2379/v2/keys/rabbitmq.

-e ETCD_TTL=30
    Specify how long (in seconds) a node can be down before it is removed from etcd’s list of RabbitMQ nodes in the cluster.

-e AUTOCLUSTER_CLEANUP=true
    Enable a periodic check that removes from the cluster any nodes that are not alive and no longer appear in the service discovery list. Scaling down removes one or more containers, and the corresponding nodes are removed from the etcd database; for example: docker service scale rabbitmq-docker-service=4.

-e CLEANUP_WARN_ONLY=false
    If CLEANUP_WARN_ONLY is set to true, the plugin only warns about the nodes it would clean up; AUTOCLUSTER_CLEANUP requires CLEANUP_WARN_ONLY=false in order to actually remove them.
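Putting the parameters above together, the full command might look like the following sketch. The image is the one introduced below; the parameter values are exactly those from the tables above:

```shell
# Assemble the service from the parameters described above
docker service create \
  --name rabbitmq-docker-service \
  -p 15672:15672 -p 5672:5672 \
  --network rabbitmq-network \
  -e RABBITMQ_ERLANG_COOKIE='ilovebeam' \
  -e AUTOCLUSTER_TYPE=etcd \
  -e ETCD_HOST=${COREOS_PRIVATE_IPV4} \
  -e ETCD_TTL=30 \
  -e AUTOCLUSTER_CLEANUP=true \
  -e CLEANUP_WARN_ONLY=false \
  gsantomaggio/rabbitmq-autocluster
```

From here, scaling up or down is a single command, e.g. docker service scale rabbitmq-docker-service=4.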

gsantomaggio/rabbitmq-autocluster

The official Docker image does not support the auto-cluster plugin; in my personal opinion it should. I created a Docker image with the plugin and registered it on Docker Hub.

Setting AUTOCLUSTER_CLEANUP to true removes the node automatically; if AUTOCLUSTER_CLEANUP is false, you need to remove the node manually.

Scaling down with AUTOCLUSTER_CLEANUP can be very dangerous: if there are no HA policies, all the queues and messages stored on the removed node will be lost. To enable an HA policy you can use the command line or the HTTP API; in this case the easier way is the HTTP API.
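A sketch of such a call is shown below. The "ha-mode":"exactly","ha-params":3 definition is the one discussed in the note that follows; the policy name ha-three, the default guest credentials, and the default vhost (%2f) are assumptions for this example:

```shell
# Define an HA policy via the management HTTP API:
# mirror every queue ("pattern":".*") to exactly 3 nodes
curl -u guest:guest -X PUT \
  -H "Content-Type: application/json" \
  -d '{"pattern":".*","definition":{"ha-mode":"exactly","ha-params":3},"apply-to":"queues"}' \
  http://localhost:15672/api/policies/%2f/ha-three
```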

Note: enabling mirrored queues across all the nodes can impact performance, especially when the number of nodes is not known in advance. Using "ha-mode":"exactly","ha-params":3 we enable mirroring on exactly 3 nodes. Scaling down should therefore be done one node at a time, so that RabbitMQ can move the mirrors to other nodes.

Conclusions

RabbitMQ can easily scale inside Docker: each RabbitMQ node has its own files and does not need to share anything through the file system, so it fits perfectly with containers.

This architecture implements important features such as:

Round-Robin connections

Failover cluster machines/images

Portability

Scaling in terms of CoreOS nodes and RabbitMQ nodes

Scaling RabbitMQ on Docker and CoreOS is easy and powerful. We are testing and implementing the same environment using other orchestration and service discovery tools, such as Kubernetes and Consul. That said, we still consider this architecture experimental.

Here you can see the final result:

Enjoy!

At Erlang Solutions we can help you design, implement, operate and optimise a system utilising RabbitMQ. We provide tier 3 (most advanced) level RabbitMQ support for Pivotal's customers, and we work closely with Pivotal's support tiers 1 and 2. We also offer RabbitMQ customisation if your system goes beyond the typical requirements, and bespoke support for such implementations.