Feature 1: Standard Linux support

Merged in 0.7.0. Special thanks to Alex Larsson and the Red Hat team, and to Junjiro R. Okajima for making AUFS possible.

This version introduces several major new features, but the most anticipated was definitely standard Linux support. As of today, Docker no longer requires a patched Linux kernel, thanks to a new storage driver contributed by Alex Larsson (see the next feature, “storage drivers”). This means that it will work out-of-the-box on all major distributions, including Fedora, RHEL, Ubuntu, Debian, SUSE, Gentoo, Arch, and more. Look for your favorite distro in our installation docs!

Feature 2: Storage drivers

Merged in 0.7.0.

A key feature of Docker is the ability to create many copies of the same base filesystem almost instantly. Under the hood, Docker makes heavy use of AUFS by Junjiro R. Okajima as a copy-on-write storage mechanism. AUFS is an amazing piece of software, and at this point it has reliably copied billions of containers over the last few years, a great many of them in critical production environments. Unfortunately, AUFS is not part of the standard Linux kernel, and it’s unclear when it will be merged. This has prevented Docker from being available on all Linux systems. Docker 0.7 solves this problem by introducing a storage driver API and shipping with several drivers. Currently three drivers are available: AUFS, VFS (which uses simple directories and copies), and DEVICEMAPPER, developed in collaboration with Alex Larsson and the talented team at Red Hat, which uses an advanced variation of LVM snapshots to implement copy-on-write. An experimental BTRFS driver is also in development, with even more coming soon: ZFS, Gluster, Ceph, etc.

When the Docker daemon starts, it automatically selects a suitable driver depending on the system’s capabilities. If your system supports AUFS, Docker will continue to use the AUFS driver, and all your containers and images will automatically be available after you upgrade. Otherwise, devicemapper will be used as the default driver. Drivers cannot share data on the same machine, but all drivers produce images that are compatible with each other and with all past versions of Docker. This means every image on the index (including those produced by Trusted Builds) remains usable across all Docker installations.
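If you’re curious which driver your daemon picked, `docker info` reports it. A minimal sketch of the check (the exact output format varies by version, so the grep pattern is an assumption):

```shell
# Print the storage driver the daemon selected at startup.
# Output format varies across Docker versions; grep keeps it simple.
docker info | grep -i driver
```

On an AUFS-capable host you would expect the driver line to name aufs; elsewhere, devicemapper.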

Feature 3: Offline transfer

Merged in 0.6.7. Special thanks to Frederic Kautz.

With offline transfer, container images can now be exported to the local filesystem as a standalone bundle and loaded into any other Docker daemon. The resulting images are fully preserved, including configuration, creation date, build history, etc. The exported bundles are regular directories and can be transported by any file transfer mechanism, including FTP, physical media, proprietary installers, etc. This feature is particularly interesting for software vendors who need to ship their software as sealed appliances to their “enterprise” customers. Using offline transfer, they can use Docker containers as the delivery mechanism for software updates, without losing control of the delivery mechanism or requiring that their customers relax their security policies.

As David Calavera from the GitHub Enterprise team puts it: “Building on-premise products based on Docker containers is much easier now thanks to offline transfer. You can always make sure your containers arrive to places without registry access.”

To use offline transfer, check out the new `docker save` and `docker load` commands.
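The round trip looks roughly like this (the image name is an illustrative placeholder, not from the original post):

```shell
# On a machine that has the image: export it to a standalone tarball,
# with configuration and build history included.
# "myrepo/myapp" is a placeholder image name.
docker save myrepo/myapp > myapp.tar

# Move myapp.tar by any means (scp, USB stick, an installer, ...), then
# import it on the target machine, which needs no registry access:
docker load < myapp.tar
```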

Feature 4: Advanced port redirects

Merged in 0.6.5.

Note: this feature introduces 2 small breaking changes to improve security. See the end of this section for details.

The run -p flag has been extended to give you more control over port redirection. Instead of automatically redirecting on all host interfaces, you can specify which interfaces to redirect on. Note that this extends the existing syntax without breaking it.

For example:

-p 8080 publishes port 8080 of the container on all host interfaces, with a dynamically allocated host port

-p 8080:8080 publishes port 8080 of the container on all host interfaces, on host port 8080

-p 127.0.0.1:80:80 publishes port 80 of the container on the host’s localhost interface only, on host port 80

You can also choose not to redirect on any host interface, effectively making that port unreachable from the outside. This is very useful in combination with links (see “Links” below), for example to expose an unprotected database port to an application container without publishing it on the public internet. You can do this without a Dockerfile thanks to the new -expose flag.
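A sketch of that pattern (the image names and port number are illustrative placeholders, not from the original post):

```shell
# Run a database whose port is exposed to links but never published on a
# host interface. "user/postgresql" and 5432 are illustrative placeholders.
docker run -d -name db -expose 5432 user/postgresql

# The application container reaches the database only through the link;
# with no -p flag, nothing is reachable from outside the host.
docker run -d -link db:db user/webapp
```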

This release introduces two breaking changes to improve security:

First, we are changing the default behavior of docker run to not redirect ports on the host. This is better for security: ports are private by default, and you can explicitly publish them with the -p flag. If you currently rely on exposed ports being published on all host interfaces by default, that will no longer be the case in 0.6.5. You can revert to the old behavior by simply adding the appropriate -p flags.

Second, we are deprecating the advanced “<public>:<private>” syntax of the EXPOSE build instruction. This special syntax allowed the Dockerfile to specify in advance that the exposed port should be published on a certain port on all host interfaces. We have found that this hurts separation of concerns between dev and ops, by restricting in advance the system administrator’s ability to configure redirects on the host. The regular “EXPOSE <private>” syntax is not affected.

For example:

Not deprecated: EXPOSE 80 will continue to expose tcp port 80 as usual.

Deprecated: EXPOSE 80:80 will trigger a warning, and be treated as identical to EXPOSE 80. The public port will simply be ignored.

We apologize for these breaking changes. We did our best to minimize the inconvenience, and we hope you’ll agree that the improvements are worth it!

Feature 5: Links

Merged in 0.6.5.

Links allow containers to discover and securely communicate with each other. Inter-container communication can now be disabled with the daemon flag -icc=false: with this flag set, container A cannot access container B unless explicitly allowed via a link. This is a huge win for securing your containers. When two containers are linked together, Docker creates a parent-child relationship between them. The parent container can access information about the child, such as its name, exposed ports, and IP address, via environment variables.

When linking two containers, Docker uses the exposed ports of the child container to create a secure tunnel for the parent to access. If a database container only exposes port 8080, then with inter-container communication disabled, the linked container will be allowed to access port 8080 and nothing else.

Example

When running a WordPress container, we need to be able to connect to a database such as MySQL or MariaDB. Using links, we can easily swap out the backend database without having to change the configuration files of the WordPress site.

In order to build a WordPress container that works with both databases, the container should look for the alias, in our example `db`, when linking to the database. This lets you access the database information via consistent environment variables no matter what the container is named. Using the two database containers from the naming example, we will link each of them to our WordPress container.

To link, just add the -link flag to docker run:

docker run -d -link mariadb:db user/wordpress

or:

docker run -d -link mysql:db user/wordpress

After creating the new container linked to the database container with the alias db, you can inspect the environment of the WordPress container and view the IP and port of the database:

$DB_PORT_3306_TCP_PORT

$DB_PORT_3306_TCP_ADDR

The environment variables are prefixed with the alias you specified in the -link flag.
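One quick way to see all the injected variables is to run a throwaway linked container and dump its environment (the `base` image here is just an illustrative placeholder):

```shell
# Start a short-lived container linked to mariadb under the alias "db"
# and print only the variables the link injected.
docker run -link mariadb:db base env | grep '^DB_'
```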

Feature 6: Container naming

Merged in 0.6.5.

We are happy to announce that we can finally close issue #1! You can now give memorable names to your containers using the new -name flag for docker run. If no name is specified Docker will automatically generate a name. When you link one container to another you will have to provide the name and alias of the child that you want to link to via -link child_name:alias.

Examples

Enough talk, let’s see some examples! You can run two databases with corresponding names like so:

docker run -d -name mariadb user/mariadb

docker run -d -name mysql user/mysql

Every command that worked with a container_id can now be used with a name that you specified:

docker restart mariadb

docker kill mysql

Feature 7: Quality

Special thanks to Tianon Gravi, Andy Rothfusz and Sven Dowideit

Obviously, “quality” isn’t really a feature you can add. But it’s become important enough to us that we wanted to list it. Quality has been a big focus in 0.7, and that focus will only increase in future versions. It’s not easy keeping up with a project as incredibly active as Docker – especially when its success came so quickly. When the floodgates opened, much of the code was still immature, unfinished or scheduled to be refactored or rewritten. 6 months and 2,844 pull requests later, much of it is still there, drowned in a daily deluge of contributions and feature requests.

It took us a while to find our bearings and adapt to the new, crazy pace of Docker’s development. But we are finally figuring it out. Starting with 0.7, we are placing Quality in all its forms – user interface, test coverage, robustness, ease of deployment, documentation, consistency of APIs – at the center of our development process. Of course, there is no silver bullet: Docker still has plenty of bugs and rough edges, and the progress will be slow and steady rather than the elusive “grand rewrite” we initially hoped for. But day after day, commit after commit, we will continue to make gradual improvements.

We are grateful that so many of you have understood the potential of Docker and shown so much patience for the quirks and bugs. “It’s OK”, you said. “Docker is so useful I don’t mind that it’s rough”. Soon we hope to hear you say “I use Docker because it’s useful AND reliable”.

Norberto

A few weeks ago I wanted to use docker for my mail server, using one docker for every component (postfix, policyd, cyrus, etc.) but when I tried to start the cyrus docker it failed because AUFS lacks some feature (I can’t remember right now the error message).

After this episode I simply cannot trust AUFS and I want to switch to DM. And, if possible, migrate all my containers from AUFS to DM.

[…] the creation and deployment of applications within containers; it was covered here in October. The 0.7 release is now available, with a long list of new features, including the ability to run on standard Linux […]

Christopher J. Bottaro

Excuse my ignorance, but is “Feature 2: Storage drivers” one of the bigger milestones towards getting Docker running natively in OS X? Also, is that feature (natively working in OS X) road mapped for a specific Docker version? Or is there a ticket where I can track that progress?

Hi Christopher; storage drivers are a significant milestone, but there’s still a lot of work! Namely, we need to implement runtime drivers (to replace Linux Containers with something OS-agnostic); then we need to implement an optimized storage driver for OS X (maybe based on ZFS, maybe something else).

[…] Docker 0.7 runs on all Linux distributions – and 6 other major features | Docker Blog November 26, 2013 Docker 0.7 runs on all Linux distributions – and 6 other major features So, Docker 0.7 is finally here! We hope you’ll like it. On top of countless bug fixes and small usability impr… […]


Martin

Ulises Reyes

This has never been a problem in Docker itself but rather in the AUFS3 module. The default is set to 42 but on my custom kernel I have it set to somewhere in the 32000 range (There’s some preset options)


“Currently 3 drivers are available: AUFS, VFS (which uses simple directories and copy) and DEVICEMAPPER, developed in collaboration with Alex Larsson and the talented team at Red Hat, which uses an advanced variation of LVM snapshots to implement copy-on-write.”

DEVICEMAPPER requires the “thinp” device mapper target, which has been available since kernel 3.2 (and has been back-ported to RHEL kernels as well). VFS doesn’t require any special kernel feature. Actually it works even on non-Linux targets 🙂
