A Lifting Experience at the Dock

If you want to distribute your programs across multiple platforms, you need to prepare them to run in foreign environments from the start. Linux container technology and the resource-conserving Docker project let you test your own Perl modules on several Linux distributions in one fell swoop.

Virtualization as a panacea? No way! Instead of abstracting the complete hardware and operating system, the Docker project builds on the Linux container (LXC) support [1] in newer Linux kernels and isolates environments at the process and filesystem level. Savings in memory consumption and significant performance gains are good reasons for using containers instead of classical virtualization. A single server effortlessly running many mutually isolated applications can open up completely new possibilities in the data center.

As in true virtualization, isolated containers decouple applications from one another. One big advantage: If two applications use the same library, but in different versions, this is not an impediment, because each container automatically comes with everything it needs.

The Docker project [2] is based on the LXC features of newer Linux kernels and boots up a daemon that manages all Docker containers [3]. It runs on the host system or in a VM. In other words, if you have an older system whose kernel does not yet support Docker, you can simply produce, say, an Ubuntu 13 image with Vagrant [4] and install Docker on it [5].

Docker containers offer much more than LXC alone, starting with a standardized application format that lets you run custom containers on any other Docker system, thanks to the layered filesystem.

An application installs on top of a base image; Docker only stores the differences and conveniently versions them with Git-like precision. Docker also defines a network interface between the containers, so the applications running in them can communicate with one another over network ports. The fact that users can deposit their defined Docker containers in a public or private repository for subsequent use additionally increases Docker's acceptance in the community. The project has been going for more than a year now, and the software has been ready for production, according to its developers, since version 0.8. Version 1.0 is supposed to appear later this year.

In this article, I use the Docker technology to test home-grown CPAN modules. To do this, Docker creates multiple containers with different Linux distributions, then automatically tests whether the tarball of a CPAN module can be installed in them.

Residential Containers with Great Interior Design

Users first pick a basic version of a new container from the public repository and then set it up according to their own requirements. For example,

docker pull ubuntu
docker run -i -t ubuntu /bin/sh

downloads a 200MB Ubuntu image off the web and then opens a shell in the container. In it, you can then run apt-get update and apt-get install to install new Ubuntu packages to your heart's content. The -i option selects interactive mode; -t allocates a pseudo-terminal for controlling the shell.

A file named Dockerfile pins down the container settings in source control; Listing 1 [6] shows an example that uses the FROM directive to define the base image from which to derive the container – in this case, ubuntu. In addition to the standard image, the docker search ubuntu command uncovers another 683 images that are available for download.

The lines introduced by the RUN keyword in the Dockerfile specify the commands docker executes to upgrade the container. After an apt-get update to refresh the package index, lines 5 and 6 install the libwww-perl package, for downloading CPAN modules, and the make utility, for building and testing them, in the Perl test container.
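Listing 1 itself is not reproduced in this excerpt; a minimal Dockerfile along the lines just described might look like the following sketch (the exact line order and a MAINTAINER line are assumptions, so the line numbers cited in the text are only approximate):

```dockerfile
# Hypothetical reconstruction of Listing 1 (details assumed, not the original)
FROM ubuntu
MAINTAINER you@example.com
RUN apt-get update
RUN apt-get install -y cpanminus
RUN apt-get install -y libwww-perl
RUN apt-get install -y make
```

Running docker build in the directory containing this file produces a reusable image; each RUN line becomes its own cached filesystem layer.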

CPAN Minus as a Plus

The cpanminus package in line 4 includes the cpanm utility, which not only tests CPAN modules but also unzips and installs distribution tarballs and launches their unit test suites. This is not always trivial, because Perl modules often define dependencies on other CPAN modules, which first need to be retrieved, tested, and installed before the installation of the actual module can begin.

All the containers I will be dealing with in this article install this utility to meet the requirements of the Linux distribution in question. This means the test script, which tests the tarball in all configured containers, can run a standardized command each time.
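The standardized command boils down to one cpanm call per container. A minimal driver script along these lines might look as follows; the image names are made up, and the DOCKER variable is an added convenience so the loop can be dry-run without a Docker daemon:

```shell
#!/bin/sh
# Hypothetical test driver (image names are assumptions): run a CPAN
# tarball's test suite inside each configured Docker image via cpanm.
DOCKER=${DOCKER:-docker}                # override with DOCKER=echo to dry-run
IMAGES="perl-test-ubuntu perl-test-arch"

run_tests() {
  tarball=$1
  for image in $IMAGES; do
    echo "== $tarball on $image =="
    # mount the working directory; cpanm fetches deps, builds, and tests
    $DOCKER run -v "$(pwd):/tarball" "$image" \
      cpanm --test-only "/tarball/$tarball" || return 1
  done
}

DOCKER=echo                             # dry run: print the docker commands
run_tests My-Module-0.01.tar.gz
```

The cpanm --test-only switch runs the distribution's test suite without installing it, which is exactly what a throwaway test container needs.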

Auto-Yes

It is important for the command running in the container to avoid interacting with the user. If there is no interactive connection with the container through a pseudo-terminal, any prompt automatically leads to an error and aborts the command. Ubuntu's apt-get always asks the superuser Do you want to continue [Y/n]? before installing a requested package; typing y confirms. The Dockerfile in Listing 1 therefore calls apt-get with the -y option, which suppresses the prompt and starts the installation immediately.

The Dockerfile in Listing 2, on the other hand, describes the contents of the Perl test container for Arch Linux. The matching Docker image is located under base/arch in the public repository. In this distribution, the package manager is named pacman; it usually only installs packages if the user responds to the question Proceed with installation? [Y/n] by typing y. With the --noconfirm option, it goes ahead without prompting.

Although Arch Linux comes with a Perl binary out of the box, it lacks the tar and make utilities. The cpanm script is contained in the Arch community/cpanminus package; the Dockerfile thus calls pacman to install all of these dependencies when setting up the container.

However, Arch Linux installs the cpanm script in /usr/bin/vendor_perl, which is not in the shell's $PATH. Because the test script later calls cpanm without a path, the second RUN command uses ln -s to create a symbolic link below /usr/bin, where the shell does find commands on Arch Linux; this in turn ensures platform parity across all test containers.
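As with Listing 1, Listing 2 is not reproduced in this excerpt; a hedged reconstruction based on the steps just described might read like this (package selection and the -Sy sync flags are assumptions):

```dockerfile
# Hypothetical reconstruction of Listing 2 for Arch Linux (details assumed)
FROM base/arch
# refresh the package database and install non-interactively
RUN pacman -Sy --noconfirm tar make cpanminus
# Arch puts cpanm under /usr/bin/vendor_perl; link it into the default $PATH
RUN ln -s /usr/bin/vendor_perl/cpanm /usr/bin/cpanm
```

After the symlink step, a bare cpanm invocation behaves identically in the Ubuntu and Arch containers.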

Docker is an economical alternative to conventional virtualization. Because each Docker container shares the host's operating system kernel, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.