We’ve updated the Blockbridge Volume Driver with new support for Docker Swarm. This update makes it simple to deploy and scale the volume driver in a swarm deployment, using Docker Compose. Additionally, we’ve introduced helper scripts that enable you to create a swarm for development and testing.

Background

Docker Swarm is a Docker-native clustering solution. It allows you to schedule applications to run on multiple hosts, called swarm “nodes”. Constraints, affinities, and failover of applications are all possible. By pointing the Docker command line at the swarm master, operations on images, volumes, and containers work across the swarm with the commands you are already familiar with. Additionally, Docker Compose continues to work as expected.

Blockbridge Volumes

Blockbridge volumes are multi-host aware. This means that any volume is accessible from any node in the swarm. No matter where an application runs, its data volume is always available. When an application container fails over or moves from one node to another, its volume automatically attaches to the node where the container now runs. No data copies are required.

In a swarm, a volume create operation is broadcast across all nodes. However, because Blockbridge volumes are not tied to any one particular node, only one Blockbridge volume is created. Similarly, when a volume is removed in the swarm, Blockbridge interprets the broadcast operation and removes the single underlying volume.
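For example, the single-volume semantics are visible from the swarm master with the standard volume commands (the driver name `blockbridge` and the volume name `datavol` are illustrative):

```shell
# Create a volume; the swarm broadcasts the request, but the
# Blockbridge driver resolves it to one multi-host volume.
docker volume create --driver blockbridge --name datavol

# List volumes across the swarm; datavol appears once, not once per node.
docker volume ls

# Remove the volume; the broadcast removal deletes the single
# underlying Blockbridge volume.
docker volume rm datavol
```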

To enable Blockbridge volumes in a swarm environment, the Blockbridge volume driver must be running on each of the swarm nodes. Because the Blockbridge volume driver runs as a container, this is as simple as using Docker Compose to scale the driver up to the number of nodes in the swarm. By default, the volume drivers will automatically discover a Blockbridge simulator running in the swarm and configure themselves. This makes it easy to get a development or test environment up and running quickly with the default compose files.

Creating a Docker Swarm

Docker Machine can be used to configure a swarm. If you already have a swarm set up, you can skip to the Simulator Setup section. If you do not already have a swarm, a script in the Blockbridge simulator repository can help you create one.

If you have not already done so, clone the Blockbridge simulator repository. Then, run the init-swarm.sh script. By default, the script creates a swarm using virtualbox. If you want to use another driver (e.g., OpenStack), specify the driver type and an environment file that sets the environment variables Docker Machine needs for that driver. The init-swarm.sh script creates a machine for the keystore, the swarm master, and a number of nodes. The machines are named based on the current user ID. Once the swarm is set up successfully, point your Docker command line at the swarm master.
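A typical session might look like the following. The repository URL and the machine name passed to docker-machine are assumptions; check the repository README and `docker-machine ls` for the actual values in your environment:

```shell
# Clone the Blockbridge simulator repository (URL assumed).
git clone https://github.com/blockbridge/blockbridge-simulator
cd blockbridge-simulator

# Create a swarm using the default virtualbox driver.
./init-swarm.sh

# Point the Docker CLI at the swarm master. The machine name is
# derived from your user ID; confirm it with "docker-machine ls".
eval $(docker-machine env --swarm ${USER}-swarm-master)
```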

The next sections all assume your Docker command line is pointing at the swarm master.

Simulator Setup for Swarm

Next, we’ll configure and run the Blockbridge simulator in the swarm. This requires two pieces of information: the swarm node constraint that specifies where to run the simulator, and the external IP address of that node. Pass this information to the compose file through two environment variables: BB_SIM_NODE and BB_SIM_IP.
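For instance, to pin the simulator to a node named `node1` with external address `192.168.99.101` (both values are examples; the exact constraint format expected by the compose file may differ, so substitute your own):

```shell
# Swarm node that should run the simulator.
export BB_SIM_NODE="node1"

# Externally reachable IP address of that node.
export BB_SIM_IP="192.168.99.101"
```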

The simulator repository contains a compose file specifically for swarm: docker-compose-swarm.yml. Use it to run the Blockbridge simulator on the swarm-master node:
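With the environment variables set, bring the simulator up from the swarm-specific compose file:

```shell
# Start the simulator in the background using the swarm compose file.
docker-compose -f docker-compose-swarm.yml up -d
```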

This will start the simulator, configure an overlay discovery network, and expose ports so that the simulator is accessible by volume drivers, management tools, and storage clients.

Scaling the Volume Driver

The Blockbridge volume driver can scale up to the number of nodes in the swarm. Each instance of the driver will automatically discover and connect to the Blockbridge simulator and immediately start to provide service.
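As a sketch, assuming the driver’s compose file defines a service named `volumedriver` (check the compose file for the actual service name) and a three-node swarm, scaling looks like:

```shell
# Start one instance of the volume driver.
docker-compose up -d

# Scale to one instance per swarm node (classic Compose "scale" syntax).
docker-compose scale volumedriver=3
```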

Using Blockbridge Volumes

Once the volume driver is running on each node in the swarm, you can create and use Blockbridge volumes the same as you would in a non-swarm setup. Now, any application that consumes the volume can run on any node in your container datacenter!
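For example (the volume name, image, and mount point are illustrative):

```shell
# Create a Blockbridge volume from anywhere in the swarm.
docker volume create --driver blockbridge --name appdata

# Run a container that mounts the volume. The swarm may schedule it
# on any node, and the volume attaches to that node automatically.
docker run --rm -v appdata:/data busybox sh -c 'echo hello > /data/hello'
```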