I had some time recently to start playing around with Docker’s new runc / OpenContainers work. runc is basically the old libcontainer, now spun out into an industry consortium governed by the Linux Foundation. So, Docker and CoreOS are now friends, or at least frenemies, which is very exciting.

The README over on runc doesn’t fully explain how to get runc to work, i.e., to run a simple example container. They provide a nice example container.json file, but it comes without a rootfs, which is confusing if you’re just getting started. I posted a GitHub issue comment about how to make their container.json work.

Then untar docker-ubuntu.tar into a directory called rootfs, which should be in the same parent directory as your container.json. You now have a rootfs that will work with the container.json linked above. Type sudo runc and you’ll be at an sh prompt, inside your container.
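Putting it together, the sequence looks roughly like this. This is a sketch, assuming docker-ubuntu.tar was exported from a Docker Ubuntu container (for example with docker export) and that container.json sits in the current directory:

# assumption: docker-ubuntu.tar was produced with something like
#   docker export $(docker create ubuntu) > docker-ubuntu.tar
mkdir rootfs
tar -C rootfs -xf docker-ubuntu.tar
# container.json and rootfs/ now sit side by side in the same directory
sudo runc
# you should land at an sh prompt inside the container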

Once you have a Kubernetes cluster up and running, there are three key abstractions to understand: pods, services and replication controllers.

Pods. A pod — as in a pod of whales (whale metaphors are very popular in this space) — is a group of containers scheduled on the same host. They are tightly coupled because they are all part of the same application and would have run on the same host in the old days. Each container in a pod shares the same network, IPC and PID namespaces. Of course, since Docker doesn’t support shared PID namespaces (every Docker process is PID 1 of its own hierarchy and there’s no way to merge two running containers), a pod right now is really just a group of Docker containers running on the same host with shared Kubernetes volumes (as distinct from Docker volumes).
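To make that concrete, here is a minimal, illustrative pod spec (the names and images are placeholders, not from the original post): two containers that share an emptyDir volume, created with kubectl.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/heartbeat; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
EOF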

Pods are a low-level primitive. Users do not normally create them directly; instead, replication controllers are responsible for creating pods (see below).

Replication Controllers. Pods, like the containers within them, are ephemeral. They do not survive node failures or reboots. Instead, replication controllers are used to keep a certain number of pod replicas running at all times, taking care to start new pod replicas when more are needed. Thus, replication controllers are longer lived than pods and can be thought of as a management abstraction sitting atop the pod concept.
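As a sketch (again with placeholder names and images, using the image-processing backend mentioned below as the running example), a replication controller that keeps three replicas of a pod running might look like this:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: image-proc
spec:
  replicas: 3
  selector:
    app: image-proc
  template:
    metadata:
      labels:
        app: image-proc
    spec:
      containers:
      - name: image-proc
        image: example.com/image-proc:latest
        ports:
        - containerPort: 8080
EOF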

Services. Services are an abstraction that groups together multiple pods to provide a service. (The term “service” here is used in the microservices architecture sense.) The example in the Kubernetes documentation is that of an image-processing backend, which may consist of several pod replicas. These replicas, grouped together, represent the image processing microservice within your larger application.

A service is longer lived than a replication controller, and many replication controllers may be created and destroyed during a service’s life. Just as replication controllers are a management abstraction sitting atop the pod abstraction, services can be thought of as a control abstraction that sits atop multiple replication controllers.
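A hypothetical service definition for that image-processing backend, selecting the pods created by the replication controller sketched above via their label, might look like this (the port numbers are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: image-proc
spec:
  selector:
    app: image-proc
  ports:
  - port: 80
    targetPort: 8080
EOF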

This tutorial shows you one method you can use to test out Docker Swarm on a single physical machine, like your laptop. We’ll create 3 VMs: two Swarm worker nodes and one Swarm manager.

Setting Up the VMs and Networking

First, create three VirtualBox VMs, each with 1 GB RAM and 1 CPU. Set up each one with bridged networking, meaning that your Linksys/AirPort/whatever router will assign each of them an IP on the same subnet.
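If you’d rather script the VM creation than click through the GUI, a VBoxManage sequence along these lines should work (the VM name, OS type and host interface name here are placeholders; repeat for each of the three VMs):

VBoxManage createvm --name swarm1 --ostype Ubuntu_64 --register
VBoxManage modifyvm swarm1 --memory 1024 --cpus 1
VBoxManage modifyvm swarm1 --nic1 bridged --bridgeadapter1 en0
# --bridgeadapter1 takes the name of your host's network interface (e.g. en0 or eth0)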

In this example, I’ll use three Ubuntu machines with the hostnames below, plus static DHCP in the router to force them to always have the same IPs:

Alternate method: if you don’t want your VMs to be exposed directly on your LAN, you can use “internal networking” in VirtualBox. This will put all three VMs on the same virtual LAN within your laptop. Turn it on by doing this on the host:
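The original command isn’t reproduced here; a plausible form, assuming VMs named swarm1, swarm2 and swarm3 and an internal network called swarmnet, is:

VBoxManage modifyvm swarm1 --nic1 intnet --intnet1 swarmnet
# repeat for swarm2 and swarm3; all three then share the "swarmnet" virtual LAN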

You’ll need to install Docker on each machine. I won’t cover that here. Afterward, do this:

docker pull swarm

to retrieve the Swarm image, which is the same image used for both the Swarm worker nodes and the manager.

The image, by the way, contains just a single Go binary called swarm. If you rebuild that binary, you can run it directly, without having Docker package it into a new container. I won’t cover that more advanced scenario here, though.

Running Swarm

On any machine, do this one-time operation:
docker run --rm swarm create
# gives back some token like 372cd183a188848c3d5ef0e6f4d7a963
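That token identifies your cluster when you use Docker Hub’s token discovery. The rest of the workflow isn’t shown above; roughly, you run swarm join on each worker and swarm manage on the manager, substituting your own token and IPs for the placeholders below. This sketch also assumes each node’s Docker daemon has been configured to listen on TCP port 2375.

# on each worker node, advertising that node's own IP
docker run -d swarm join --addr=192.168.1.101:2375 token://372cd183a188848c3d5ef0e6f4d7a963
# on the manager
docker run -d -p 4000:2375 swarm manage token://372cd183a188848c3d5ef0e6f4d7a963
# then point a Docker client at the manager to see the whole cluster
docker -H tcp://192.168.1.100:4000 info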

Yesterday, I assembled a few copies of my Dual Relay Shield (rev 2). Here’s a picture of its handsome exterior:

Dual Relay Shield v2 (green thing on top) connected to a Rascal 0.6 (red and yellow thing on bottom). The DRS lets you switch 2 relays on and off to control devices up to 5 amps at 220 volts.

The shield has two relays that can switch up to 5 amps — this could be a pair of lights, motors, speakers, etc. It also has an integrated I2C temperature sensor. You could use this to build, for instance, a web-based thermostat. I expect Brandon will set up a Rascal demo or tutorial using the shield in the near future, to which I’ll link from here once it exists.

uWSGI is the HTTP server included with the Rascal. I want to fiddle with its source code and extend it, so I started by building the stock distribution on an x86 Ubuntu 11.04 box. This is how I did it.
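The step-by-step from the original post isn’t reproduced here, but a stock uWSGI build on an Ubuntu box of that era looked roughly like this (the version number is a placeholder; grab whatever release is current):

sudo apt-get install build-essential python-dev libxml2-dev
wget http://projects.unbit.it/downloads/uwsgi-1.0.tar.gz
tar xzf uwsgi-1.0.tar.gz
cd uwsgi-1.0
make    # or: python uwsgiconfig.py --build
# the build drops a uwsgi binary in the current directory
./uwsgi --version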

Here’s a tutorial showing how to cross-compile your own device driver (as a kernel module) for the Rascal and then load/unload it into Linux on the device. Before you attempt this tutorial, you need to set up a Rascal kernel build environment. The instructions are here: rascalmicro.com/docs/build-guide.html. Even if you’ve previously done this, remember that before compiling anything you need to re-run these commands (every time you open a new terminal window):
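The commands themselves come from the build guide and aren’t shown above; they are the usual cross-compile environment variables, something along these lines (the toolchain prefix and path are placeholders; use whatever the build guide actually specifies):

export ARCH=arm
export CROSS_COMPILE=arm-none-linux-gnueabi-
export PATH=$PATH:/path/to/your/cross-toolchain/bin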

Now for the driver building tutorial. Let’s download Dave Hylands’s gpio-event driver and usermode application for the Gumstix Overo. We can compile this driver for the Rascal unmodified once we get some paths set correctly in the Makefile. Check out the Rascal kernel git branch to ~/rascal/linux-2.6 and download Hylands’s code to ~/rascal/gpio-event. Modify ~/rascal/gpio-event/module/Makefile to use these alternate variable definitions:
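The alternate variable definitions themselves aren’t reproduced above; they point the module Makefile’s kernel directory at ~/rascal/linux-2.6 and set the ARM cross-compiler. Once the Makefile is adjusted, building, loading and unloading the module looks roughly like this (the exact .ko filename depends on Hylands’s Makefile and may differ):

# on the build machine
cd ~/rascal/gpio-event/module
make
scp gpio-event-drv.ko root@rascal:/root/
# on the Rascal itself
insmod /root/gpio-event-drv.ko
lsmod    # confirm the module loaded; note the name it registered under
rmmod gpio_event_drv    # unload it again, using the name lsmod reported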