I will assume that you have already installed Docker on your machine. I
have tested these instructions on both Ubuntu Linux and OS X (OS X
users will need to install boot2docker; installation instructions are
not covered in this guide).

Dockerfile

To create a Docker image we need to create a text file named
Dockerfile and use the available commands and syntax to declare how
the image will be built. At the beginning of the file we need to specify
the base image we are going to use and our contact information:
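The snippet itself is missing from this extract; a minimal sketch of that opening section (the Ubuntu version and the maintainer email are placeholders — check the repository linked at the end for the real file) might look like:

```dockerfile
# Base image: Ubuntu (14.04 was current at the time of writing)
FROM ubuntu:14.04

# Contact information of the image maintainer
MAINTAINER Andrea Grandi <name@example.com>
```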

We are installing version 9.3 of PostgreSQL; the instructions would be
very similar for any other version of the database.

Note: it's important to have the apt-get update and apt-get
install commands in the same RUN line, otherwise they would be
cached as two different layers by Docker, and if an updated package
becomes available it won't be installed when the image is rebuilt.
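A sketch of that install step, keeping update and install in a single RUN line (the package names are the standard Ubuntu ones for PostgreSQL 9.3; verify against the Dockerfile in the linked repository):

```dockerfile
# update and install stay in one RUN line so the package index
# is refreshed whenever this layer is rebuilt
RUN apt-get update && apt-get install -y \
    postgresql-9.3 \
    postgresql-client-9.3 \
    postgresql-contrib-9.3
```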

At this point we switch to the postgres user to execute the next
commands:
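A hedged sketch of what those remaining steps might look like (the pguser/pgdb names match the psql test command shown later; the paths are the standard ones for PostgreSQL 9.3 on Ubuntu, and the exact content may differ from the original Dockerfile):

```dockerfile
# Switch to the postgres user created by the package installation
USER postgres

# Create the application user and database
RUN /etc/init.d/postgresql start && \
    psql --command "CREATE USER pguser WITH SUPERUSER PASSWORD 'pguser';" && \
    createdb -O pguser pgdb

# Allow remote connections and listen on all interfaces
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf && \
    echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf

# Expose the default PostgreSQL port and run the server in the foreground
EXPOSE 5432
CMD ["/usr/lib/postgresql/9.3/bin/postgres", \
     "-D", "/var/lib/postgresql/9.3/main", \
     "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
```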

Building Docker image

Once the Dockerfile is ready, we need to build the image before running
it in a container. Please customize the tag name using your own
docker.io hub account (or you won't be able to push it to the hub):

docker build --rm=true -t andreagrandi/postgresql:9.3 .

Running the PostgreSQL Docker container

To run the container, once the image is built, you just need to use this
command:

docker run -i -t -p 5432:5432 andreagrandi/postgresql:9.3

Testing the running PostgreSQL

To test the running container we can use any client, even the
command-line one:

psql -h localhost -p 5432 -U pguser -W pgdb

When you are prompted for the password, type: pguser
Please note that localhost is only valid if you are running Docker
on Ubuntu. If you are an OS X user, you need to discover the correct IP
using: boot2docker ip

Persisting data

You may have noticed that if you wrote some data to the DB and then
stop the container, that data is lost. This is because by default
Docker containers are not persistent. We can solve this problem using
a data container. My only suggestion is not to set this up manually, but
to use a tool like fig to orchestrate it. Fig is a
tool to orchestrate containers, and its features are being rewritten in
Go and integrated into Docker itself. So if you prepare a
fig.yml configuration file now, you will hopefully be able to
reuse it once this feature is integrated into Docker. Please refer
to the fig website for installation instructions (briefly: under Ubuntu
you can use pip install fig and under OS X you can use brew install
fig).
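A minimal fig.yml for this setup might look like the following sketch (the service and volume names are illustrative; the real file lives in the repository linked at the end):

```yaml
db:
  image: andreagrandi/postgresql:9.3
  ports:
    - "5432:5432"
  volumes_from:
    - dbdata

# data-only container: it never runs, it just owns the volume
dbdata:
  image: andreagrandi/postgresql:9.3
  command: "true"
  volumes:
    - /var/lib/postgresql
```

With this file in place, fig up starts both containers, and the data written under /var/lib/postgresql survives restarts of the db container.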

If you write some data to the database, then stop (CTRL+C) the
running containers and spin them up again, you will see that your
data is still there.

Conclusion

This is just an example of how to prepare a Docker container for a
specific service. The difficult part comes when you have to spin up
multiple services (for example a Django web application using
PostgreSQL, RabbitMQ, MongoDB etc.), connect them all together and
orchestrate the solution. I will perhaps cover this in a future
post. You can find the full source code of my PostgreSQL Docker
image, including the fig.yml file, in this
repository: https://github.com/andreagrandi/postgresql-docker

Other articles

If you use docker.io (or any similar service) to
build your Docker containers, you may want your Docker host to
automatically pull the new image and restart the container once the
build completes.

Docker.io gives you the possibility to set a web hook after a
successful build: basically it does a POST to a defined URL and sends
some information in JSON format.

docker-puller
listens to these web hooks and can be configured to run a particular
script, given a specific hook. It's a very simple service I wrote using
Python/Flask. It's also my first Flask application, so if you want to
improve it, feel free to send me a pull request on GitHub.

How to use docker-puller

Setting up the service should be quite easy. After you clone the
repository from https://github.com/glowdigitalmedia/docker-puller there
is a config.json file where you define the host, port, a
token and a list of hooks you want to react to. For example:
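The example itself is missing from this extract; based on the description above and the webhook URL shown below, a plausible config.json (the token value and script path are placeholders) would be:

```json
{
    "host": "localhost",
    "port": 8000,
    "token": "abc123",
    "hooks": {
        "hello": "scripts/hello.sh"
    }
}
```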

Once configured, I suggest you set up an Nginx entry (instructions
not covered here) that, for example, redirects
yourhost.com/dockerpuller to localhost:8000 (I would also advise
enabling SSL, or people could sniff your token). The
service can be started with python app.py (or you can set up a
Supervisor script).
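A minimal Nginx entry for that redirect might look like the following sketch (server name is a placeholder, SSL setup omitted):

```nginx
server {
    listen 80;
    server_name yourhost.com;

    # forward /dockerpuller requests to the docker-puller Flask app
    location /dockerpuller {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
    }
}
```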

At this point docker-puller is up and running. Go to the docker.io
automatic build settings and set up a webhook like this:
http://yourhost.com/dockerpuller?token=abc123&hook=hello

Every time docker.io finishes building and pushing your image to the
Docker registry, it will POST to that URL. docker-puller will catch
the POST, check for a valid token, get the hook name and execute
the corresponding script.
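The script a hook points at can be anything executable; for this use case it would typically pull the freshly built image and restart its container. A sketch (image and container names are illustrative, and it assumes the Docker daemon is reachable from the host running docker-puller):

```shell
#!/bin/bash
# hello.sh - pull the newly built image and restart its container
docker pull andreagrandi/postgresql:9.3

# stop and remove the old container if it exists, ignoring errors
docker stop postgresql || true
docker rm postgresql || true

# start a fresh container from the updated image
docker run -d --name postgresql -p 5432:5432 andreagrandi/postgresql:9.3
```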

That's all! I hope this very simple service can be useful to other
people, and once again, if you want to improve it, I will be glad to
accept your pull requests on GitHub.