Docker: Containerize your apps

A Docker container is similar to a virtual machine: it lets you run a pre-packaged "Linux box" in isolation. The main difference is that a Docker container is not as isolated from its surrounding environment as a typical virtual machine would be. A Docker container shares the Linux kernel with the host operating system, which means it doesn't need to "boot" the way a virtual machine does.

You can think of a Docker image as a complete Linux installation. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it's perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa). Docker containers are isolated from the host machine by default, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network.

Docker containers are ephemeral by default: any data written inside a container lives only as long as that container, and once it is removed and recreated from its image, everything reverts to the state the image was in when the container started. To keep data around, you would mount a volume.

Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The Nginx image will serve all our static content.
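A minimal Dockerfile for such an Nginx-based image might look like this. This is a sketch: the `build/` folder name is an assumption and depends on where your client's compiled static files live.

```dockerfile
# Start from the official Nginx image (sketch; adjust paths to your project)
FROM nginx

# Copy the client's static build output into Nginx's default web root
COPY build/ /usr/share/nginx/html/

# Nginx listens on port 80 by default
EXPOSE 80
```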

Now, to build the image, run the following command from the dockerextnode/client folder, in the Docker terminal window:

$ docker build -t extclient .

Note: Because I migrated from the Boot2Docker command to Docker Machine, I wasn't able to build here. Instead I received the following error: "Cannot connect to the Docker daemon." Before building, I had to run the following command, which regenerated the TLS certificates for me:

$ docker-machine regenerate-certs default

To test whether it worked, run:

$ docker-machine env default

To see your newly created image, run the following Docker command:

$ docker images

You will see the images that are currently installed on your workstation. It could look like this:
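An illustrative example of the output (your image IDs, dates, and sizes will differ):

```
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
extclient           latest              4a1b2c3d4e5f        2 minutes ago       187 MB
nginx               latest              813e3731b203        2 weeks ago         133 MB
```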

If a Docker container exits automatically because of an error, you might want to look into its logs:

$ docker logs <container-id>

For example:

$ docker logs 2f9236343def

We run a new container in the background, named "dockerextnode", which maps port 80 on the host to the port that the Dockerfile exposes from the image named "extclient".
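The command to run such a container would look something like this. This is a sketch: it assumes the image's Dockerfile exposes port 80.

```shell
$ docker run -d --name dockerextnode -p 80:80 extclient
```

The -d flag detaches the container so it keeps running in the background, and -p 80:80 maps host port 80 to container port 80.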

Now the container is running. To see our app inside the container, we need to know the IP address of the Docker Machine:

$ docker-machine ip

To see running containers use:

$ docker ps

Add the -a flag ($ docker ps -a) to also list stopped containers.

This works, but only for the front-end, not for our Node.js back-end and Mongo database. Of course, you could edit the Dockerfile and add RUN instructions to install Node.js and Mongo on this image. However, that would be a bit silly, and it would take the magic powers of Docker away.
A much better approach would be to create separate images for Sencha, Node.js, and MongoDB. That's where Docker Compose comes into play... We will look into that in the next part of the tutorial.