One of the things I love about Docker, and also one of the things that enabled its success, is that the batteries are included.

What do I mean? Basically, to get started with Docker, you can just install and use it. Nothing more is needed, and complex things like network, process, and filesystem isolation are all working out of the box.

But after some time, you’ll probably start to feel like doing more — custom networking, custom IP address reservation, distributing files, and so on. These needs kick in when you start using Docker in production or when you’re preparing for that next step.

Fortunately, the batteries aren’t just included with Docker, they’re also swappable. How? With Docker plugins!

What Are Docker Plugins?

Docker plugins are out-of-process extensions which add capabilities to the Docker Engine.

This means that plugins do not run within the Docker daemon process and are not even child processes of the Docker daemon. You start your plugin wherever you want (on another host, if you need to) in whichever way you want. You just inform the Docker daemon that there's a new plugin available via Plugin Discovery (we'll explore this topic in a bit).

Another advantage of the out-of-process philosophy is that you don’t even need to rebuild the Docker daemon to add a plugin.

Authorization (authz)

This capability allows your plugins to control authentication and authorization for the Docker daemon and its Remote API. Authorization plugins are used when you need authentication or a more granular way to control who can do what against the daemon.

VolumeDriver

The VolumeDriver capability gives plugins control over the volume life cycle. A plugin registers itself as a VolumeDriver, and when the host requests a volume with a specific name for that driver, the plugin provides a Mountpoint for that volume on the host machine.

VolumeDriver plugins can be used for things like distributed filesystems and stateful volumes.

NetworkDriver

NetworkDriver plugins extend the Engine by acting as remote drivers for libnetwork. This means that you can act on various aspects of networking, from the network itself (VLANs, bridges) through its connected endpoints (veth pairs and similar) to sandboxes (network namespaces, FreeBSD jails, and so on).

IpamDriver

IPAM stands for IP Address Management, a libnetwork feature in charge of controlling the assignment of IP addresses for network and endpoint interfaces. IpamDriver plugins are very useful when you want to apply custom rules for a container's IP address reservation.

What Did We Do Before Plugins?

Before Docker 1.7, when the plugin mechanism wasn't available, the only way to take control over the daemon was to wrap the Docker Remote API. A lot of vendors did this: they wrapped the Docker Remote API and exposed their own API, acting like a real Docker daemon while doing their specific things.

The problem with this approach is that you end up in composition hell. For instance, if you had to run two of these wrappers, which one should be loaded first?

As I said, plugins run outside the main Docker daemon process. This means that the Docker daemon needs a way to speak with them. To solve this communication problem, each plugin has to implement an HTTP server which can be discovered by the Docker daemon. This server exposes a set of RPCs, issued as HTTP POSTs with JSON payloads. The set of RPC calls that the server needs to expose is defined by the protocol it implements (authz, volume, network, ipam).

Plugin Discovery Mechanism

Okay, but what do you mean by “an HTTP server which can be discovered by the Docker daemon”?

Docker has a few ways to discover a plugin’s HTTP server. It will always first check for Unix sockets in the /run/docker/plugins folder. For example, your plugin named myplugin would write the socket file in this location: /run/docker/plugins/myplugin.sock

After looking for sockets, it will check for specification files under the /etc/docker/plugins or /usr/lib/docker/plugins folders.

There are two types of specification files that can be used:

*.json

*.spec

JSON specification files (*.json)

This kind of specification file is just a *.json file with some information in it:

Name: the current name of the plugin used for discovery

Addr: the address at which the server can be actually reached

TLSConfig: this is optional; you need to specify this configuration only if you want to connect to an HTTP server over SSL
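Putting those fields together, a minimal JSON specification file might look like this (the plugin name and address here are made-up examples):

```json
{
  "Name": "myplugin",
  "Addr": "https://myplugin.example.com"
}
```

You would save this as, for instance, /etc/docker/plugins/myplugin.json, adding the optional TLSConfig section only if the server requires SSL.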

Plain text files (*.spec)

You can use plain text files with the *.spec extension. These files can specify a TCP socket or a UNIX socket:

tcp://127.0.0.50:8080

unix:///path/to/myplugin.sock

Activation Mechanism

The lowest common denominator among all the protocols is the plugin's activation mechanism, which lets Docker know which protocols are supported by each plugin. When needed, the daemon will call the plugin's /Plugin.Activate RPC, which must respond with the list of protocols the plugin implements:

{
  "Implements": ["NetworkDriver"]
}

Available protocols are:

authz

NetworkDriver

VolumeDriver

IpamDriver

Each protocol provides its own set of RPC calls in addition to the activation call. For this post, I decided to dig deeper into the VolumeDriver plugin protocol. We'll enumerate the VolumeDriver.* RPCs, and we will practically write a "Hello World" volume driver plugin.

Error Handling

Plugins must provide meaningful error messages to the Docker daemon so that it can hand them back to the client. Error handling is done via a response of this form:

{
  "Err": string
}

This should be used along with the HTTP error status codes 400 and 500.

VolumeDriver Protocol

The VolumeDriver protocol is both simple and powerful. The first thing to know is that during the handshake (/Plugin.Activate), plugins must register themselves as a VolumeDriver:

{
  "Implements": ["VolumeDriver"]
}

Any VolumeDriver plugin is expected to provide writable paths on the host filesystem.

The experience while using a VolumeDriver plugin is very close to the standard one. You can just create a volume using your volume driver by specifying it with the -d flag:

docker volume create -d=myplugin --name myvolume

Or you can start a container while creating a volume using the normal -v flag along with the --volume-driver flag to specify the name of your volume driver plugin.
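For example (myplugin and myvolume are placeholder names; the volume is created on the fly by the driver):

```
docker run -ti -v myvolume:/data --volume-driver=myplugin busybox sh
```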

Writing a “Hello World” VolumeDriver plugin

Let’s write a simple plugin that uses the local filesystem starting from the /tmp/exampledriver folder to create volumes. In simple terms, when the client requests a volume named myvolume, the plugin will map that volume to the mountpoint /tmp/exampledriver/myvolume and mount that folder.

The VolumeDriver protocol is composed of seven RPC calls, plus the activation one:

/VolumeDriver.Create

/VolumeDriver.Remove

/VolumeDriver.Mount

/VolumeDriver.Path

/VolumeDriver.Unmount

/VolumeDriver.Get

/VolumeDriver.List

For each one of these RPC actions, we need to implement the corresponding POST endpoint that must return the right JSON payload. You can read the full specification here.

Fortunately, a lot of work has already been done by the docker/go-plugins-helpers project, which contains a set of packages to implement Docker plugins in Go.

Since we’re going to implement a VolumeDriver plugin, we need to create a struct that implements the volume.Driver interface of the volume package. The volume.Driver interface is defined as follows:

Create

This function is called each time a client wants to create a volume. What's going on here is really simple. After logging the fact that the command has been called, we lock the mutex so we're sure that nobody else is performing actions on the volumes map. The mutex is automatically released (via defer) when execution exits the function.

Then we check whether the volume is already present. If so, we just return an empty response, which means the volume is available. If the volume is not yet available, we build the string with its mountpoint, check whether the directory is writable, and add it to the volumes map. We return an empty response on success, or a response with an error if the directory is not writable.

The plugin does not automatically handle directory creation (though it easily could); the user has to create the directory manually.

List

A volume plugin must provide a list of the volumes registered with the plugin itself. This function does exactly that: it cycles through all the volumes and puts them in a list that is returned as the response.
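A sketch of List (again self-contained, with minimal local types approximating the helpers library's):

```go
package main

import "fmt"

type Volume struct{ Name, Mountpoint string }

type Response struct {
	Volumes []*Volume
	Err     string
}

type exampleDriver struct {
	volumes map[string]string // volume name -> mountpoint
}

// List backs /VolumeDriver.List: it walks the volumes map and returns
// every volume registered with this driver.
func (d *exampleDriver) List() Response {
	vols := []*Volume{}
	for name, mp := range d.volumes {
		vols = append(vols, &Volume{Name: name, Mountpoint: mp})
	}
	return Response{Volumes: vols}
}

func main() {
	d := &exampleDriver{volumes: map[string]string{
		"myvolume": "/tmp/exampledriver/myvolume",
	}}
	for _, v := range d.List().Volumes {
		fmt.Println(v.Name, "->", v.Mountpoint)
	}
}
```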

Remove

This is called when the client asks the Docker daemon to remove a volume. The first thing we do here is lock the mutex, since we are operating on the volumes map, and then we delete that volume from it.

Path

There are a few circumstances in which Docker needs to know the Mountpoint of a given volume name. That's what this function does: it takes a volume name and gives back the Mountpoint for that volume.
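Path can be sketched as a simple map lookup (the not-found error text is my own):

```go
package main

import "fmt"

type Request struct{ Name string }

type Response struct {
	Mountpoint string
	Err        string
}

type exampleDriver struct {
	volumes map[string]string // volume name -> mountpoint
}

// Path backs /VolumeDriver.Path: given a volume name, report the
// mountpoint we registered for it at Create time.
func (d *exampleDriver) Path(r Request) Response {
	if mp, ok := d.volumes[r.Name]; ok {
		return Response{Mountpoint: mp}
	}
	return Response{Err: "volume not found: " + r.Name}
}

func main() {
	d := &exampleDriver{volumes: map[string]string{"myvolume": "/tmp/exampledriver/myvolume"}}
	fmt.Println(d.Path(Request{Name: "myvolume"}).Mountpoint) // prints /tmp/exampledriver/myvolume
}
```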

Mount

This is called once per container start. Here, we just look into the volumes map for the requested volume name and return the Mountpoint so that Docker can use it.

In this example, implementing this function is the same as the Path function. In a real plugin, the Mount function may want to do a few more things, like allocating resources or requesting remote filesystems for such resources.

Unmount

This function is called once per container stop when Docker is no longer using the volume. Here we don’t do anything. A production-ready plugin may want to de-provision resources at this point.
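Mount and Unmount can be sketched together (minimal local types stand in for the helpers library's; in this example Mount mirrors Path and Unmount is a no-op):

```go
package main

import "fmt"

type Request struct{ Name string }

type Response struct {
	Mountpoint string
	Err        string
}

type exampleDriver struct {
	volumes map[string]string // volume name -> mountpoint
}

// Mount backs /VolumeDriver.Mount, called once per container start.
// Here it only looks up the mountpoint; a real driver might attach a
// remote filesystem or allocate resources at this point.
func (d *exampleDriver) Mount(r Request) Response {
	if mp, ok := d.volumes[r.Name]; ok {
		return Response{Mountpoint: mp}
	}
	return Response{Err: "volume not found: " + r.Name}
}

// Unmount backs /VolumeDriver.Unmount, called once per container stop.
// Nothing to do in this example; a production driver could
// de-provision resources here.
func (d *exampleDriver) Unmount(r Request) Response {
	return Response{}
}

func main() {
	d := &exampleDriver{volumes: map[string]string{"myvolume": "/tmp/exampledriver/myvolume"}}
	fmt.Println(d.Mount(Request{Name: "myvolume"}).Mountpoint) // prints /tmp/exampledriver/myvolume
	fmt.Println(d.Unmount(Request{Name: "myvolume"}).Err == "") // prints true
}
```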

Server

Now that our driver is ready, we can create the server that will serve our Unix socket for the Docker daemon. The empty for loop is there so that the main function blocks, since the server runs in a separate goroutine.
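The helpers library hides this wiring behind its handler types; the standard-library sketch below shows roughly what happens underneath. The socket path is a writable stand-in (a real plugin would use /run/docker/plugins/exampledriver.sock), and instead of blocking forever we issue the activation handshake ourselves so the program terminates:

```go
package main

import (
	"context"
	"encoding/json"
	"io"
	"log"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

// activationMessage is the JSON body for the /Plugin.Activate handshake.
func activationMessage() []byte {
	b, _ := json.Marshal(map[string][]string{"Implements": {"VolumeDriver"}})
	return b
}

func main() {
	socket := filepath.Join(os.TempDir(), "exampledriver.sock")
	os.Remove(socket) // clean up a stale socket from a previous run

	mux := http.NewServeMux()
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		w.Write(activationMessage())
	})
	// ... the seven /VolumeDriver.* endpoints would be registered here ...

	l, err := net.Listen("unix", socket)
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(l, mux) // the server lives in its own goroutine

	// A real plugin would block forever at this point (that's what the
	// empty for loop is for). For this demo, we do the handshake ourselves:
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", socket)
		},
	}}
	resp, err := client.Post("http://plugin/Plugin.Activate", "application/json", nil)
	if err != nil {
		log.Fatal(err)
	}
	io.Copy(os.Stdout, resp.Body) // prints {"Implements":["VolumeDriver"]}
}
```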

It is recommended that you start your plugins before the Docker daemon and stop them after the Docker daemon. I usually follow this advice in production, but in my local testing environment I test plugins inside containers, so I have no choice but to start them after Docker.

Using your plugin

Now that the plugin is up and running, we can try it by starting a container and specifying the volume driver. Before starting the container, we need to create the myvolumename directory under the /tmp/exampledriver mountpoint.
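The steps above look something like this (assuming the plugin registered itself under the name exampledriver):

```
mkdir -p /tmp/exampledriver/myvolumename
docker volume create -d exampledriver --name myvolumename
docker run -ti -v myvolumename:/data --volume-driver=exampledriver busybox sh
```

Anything the container writes to /data will land in /tmp/exampledriver/myvolumename on the host.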

Available Plugins

Flocker: This plugin basically allows your volumes to “follow” your containers, enabling you to run stateful containers for things that need a consistent state like databases.

Netshare plugin: I use this to mount NFS folders inside containers. It also supports EFS and CIFS.

Weave Network Plugin: This enables you to see containers just as though they were plugged into the same network switch independently of where they are running.

Now you know that the plugin API is available and that you can benefit from it by writing your own plugins. Yay!

But there are a few more things that you can do now. For example, I showed you how to write your plugin in Go with the official plugin helpers. But you might not be a Go programmer; you may be a Rust programmer or a Java programmer or even a JavaScript programmer. If so, you may want to consider writing plugin helpers for your language!
