Before I actually build my container, I need to produce a source file called a Dockerfile. This is a standard name that the docker build process looks for – much like npm uses package.json and Gulp uses Gulpfile.js, docker uses Dockerfile (no extension). The Dockerfile describes the steps needed to build the container image.

# We are basing our container on Ubuntu Linux 14.04
FROM ubuntu:14.04
# Install the pre-requisites
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_0.12 | bash -
RUN apt-get install -y nodejs
# Copy the application into the application area
COPY . /src
RUN cd /src && npm install
RUN chmod 755 /src/run_docker_image.sh
# Expose our TCP/IP port to the host system
EXPOSE 3000
# Set up the correct run command so that it starts with our container
CMD [ "/src/run_docker_image.sh" ]

There are five distinct sections here:

We base our container on another container image – in this case, the official Ubuntu Linux 14.04 distribution that is available on hub.docker.com. We could make our base image the official node distribution, but then the Node release might change out from under us. I don’t like that – I prefer to install known versions of all software.

We then install the necessary software. The Ubuntu Linux distribution doesn’t come with curl and we need curl in order to execute our Node install instructions. Other than that, this is a direct copy of the process we developed in the last article.

Next we copy the current source files into the source directory and install the npm dependencies within the container. Finally, we adjust the permissions on the shell script we use to start up the container (more on that in a moment).

Our application listens on port 3000 – we need to be able to expose that to the outside world.

Finally, we need to run our application when the container starts; the CMD statement specifies the command to run at that point.
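For contrast, here is a sketch of what the Dockerfile could look like if we based it on the official node image instead, as discussed in the first section. The specific tag (node:0.12.7) is illustrative – pinning an exact tag is how you would mitigate the drift concern:

```dockerfile
# Alternative sketch: base on the official node image, pinned to an
# exact tag so the Node release cannot change out from under us.
FROM node:0.12.7
# Copy the application into the application area
COPY . /src
RUN cd /src && npm install
# Expose our TCP/IP port to the host system
EXPOSE 3000
# node is already on the PATH in this image, so no shell script is needed
CMD [ "node", "/src/server.js" ]
```

This trades the explicit install steps for a shorter file; I still prefer the explicit version above.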

I can check this file into source code control, and hence it’s a controlled resource that produces the same container whenever I build it. There is one additional file – run_docker_image.sh – which starts the server. Up until this point, I’ve been running node ./server.js to run the server. Inside a container, you don’t know what the working directory will be, but you do know where the source was copied. I wrote this shell script to ensure the server is started from the right place:

#!/bin/sh
cd /src
node ./server.js

Again – this can be placed in source code control (and it is – check out the GitHub repository). I can use ./run_docker_image.sh to run the server now, which is exactly what happens when the container is run.

Talking of which, it’s time to build a container:

docker build -t adrianhall/grumpy-wizards .

There is a significant amount of work to be done the first time a container is built on a server: the Ubuntu 14.04 base image has to be downloaded and all the setup steps have to run. Subsequent builds should go much faster because Docker caches the intermediate layers and the network traffic is much less. You can run docker images now to see the image.

Note the adrianhall/grumpy-wizards image. Now we can run it!

docker run -p 3000:3000 -d adrianhall/grumpy-wizards

The docker system will spit out a really long hex string: this is the container ID. Its appearance means the run command was successful, and you can use the ID later with commands like docker stop.

Note the -p argument. You can read this as “map port 3000 on the local machine to port 3000 within the container”. If you ran two containers from the same image, you would need something like “-p 3001:3000” on the second one to map port 3001 on the local machine to port 3000 of that container. Then you could access the same application through both port 3000 and port 3001.
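To make the mapping concrete, here is a small sketch that assembles the run commands for two copies of the image on different host ports. It prints the commands rather than executing them, so it is safe to run even without docker installed:

```shell
#!/bin/sh
# Print (not execute) the docker run commands for two copies of the
# same image, each mapped to a different host port.
IMAGE="adrianhall/grumpy-wizards"   # image name used in this article
CONTAINER_PORT=3000                 # the port the app listens on

for HOST_PORT in 3000 3001; do
  echo "docker run -p ${HOST_PORT}:${CONTAINER_PORT} -d ${IMAGE}"
done
```

The loop emits one run command per host port; only the host side of the -p pair changes.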

Running docker ps should show you the container running.

You can stop the container with docker stop, followed by the container ID (or a unique prefix of it).

You can see containers that are not running by using docker ps -a; docker ps -l shows only the most recently created container.

Finally, if you want to start that container again, use docker start.
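Putting the lifecycle together, here is a hedged sketch. The --name flag is my addition – the article runs the container anonymously and uses the printed ID instead – but naming the container saves copying the long hex string around:

```shell
#!/bin/sh
# Container lifecycle sketch. --name is an assumption (not used in the
# article); it lets us refer to the container without the long ID.
IMAGE="adrianhall/grumpy-wizards"
NAME="grumpy-wizards"

if command -v docker >/dev/null 2>&1; then
  docker run --name "$NAME" -p 3000:3000 -d "$IMAGE"
  docker ps                  # running containers
  docker stop "$NAME"        # stop it
  docker ps -a               # all containers, including stopped ones
  docker start "$NAME"       # start the same container again
fi
```

The guard simply skips the docker calls on a machine where docker is not installed.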

That’s it for docker. There is always more to learn about this topic, but this is enough to start containerizing my application.