Learning Docker - The Command Line Interface

From Virtual Machines (VMs) to Docker

Learning development on the JavaScript stack can sometimes be quite frustrating (even with tools like Node or NVM). This is especially true when you manage different applications with different environment dependencies (or different versions of node, npm, etc.) for each app you're developing. You may have some legacy apps that you don't touch very often, but you want your host machine to have the latest and greatest when you start your next project. That's understandable, and we've all been there before.

I used to reach for VMs to solve this problem. It made managing my various projects’ dependencies and environments a breeze. The VMs helped to ensure consistent behaviour between my home and work machines.

However, VMs aren't all glitz and glamour; they carry much more overhead. A VM can take several minutes to create and launch, whereas a Docker container can be created and launched in just a few seconds, since containers are much smaller and share the host's operating system kernel. For the same reason, an application in a Docker container often runs noticeably faster than in a VM, and more containers can share a single host, because the OS isn't duplicated for each one.

This was more than enough to get me excited to learn more about Docker. And, if you’re reading this, I’m sure you’re already convinced, too.

About Docker

What is Docker?

Docker is a tool we can use to isolate and containerize our application, and the environment in which it runs, from our host machine. It also lets us keep our entire "stack", along with all the requirements and dependencies the application needs to run, under version control.

Why is Docker so great?

Docker is great for a plethora of reasons, but for me the main advantages are how easily Docker can be run locally and deployed to production, and how it helps ensure that what works on my machine will work on every machine.

Note: I have run into instances where some of the terminal commands used to execute/build Docker commands/images produce errors on Windows, but with minor testing and tweaking, cross-platform compatibility can be achieved.

What does Docker solve?

It eliminates the "well, it works on my machine" excuse from your vocabulary. It also helps ensure that when a new developer is on-boarded to your project, they'll be ready to run your application as soon as they have Git and Docker installed, which minimizes onboarding time.

For our demo, we will be aliasing npm commands to Docker commands to make the development cycle more familiar to developers. As a result, npm is a dependency on the host machine, used simply to kick-start the Docker commands.
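As a sketch of what that aliasing could look like (the script names and image tag here are hypothetical; we'll build the real setup in part 2), a package.json might proxy its scripts to Docker like this:

```json
{
  "scripts": {
    "build": "docker build -t my-app:dev .",
    "dev": "docker run -it --rm -p 8080:8080 my-app:dev"
  }
}
```

With something like this in place, developers still type the familiar npm run dev, while Docker does the actual work under the hood.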

Digging In

Fear not the Terminal

To get started with this demo, you should have Docker installed on your host machine. Additionally, as mentioned above, in parts 2 and beyond, we will require npm (v6.4.x or later) and node (v8.11.x or later) on your host machine.

This was written and executed on macOS. If you notice any commands that do not work for Windows, please leave a comment to help out other readers. I'll try to update my notes here too.

Installing Docker

A quick Google search for "install docker" eventually takes you here: https://docs.docker.com/install/. The installation steps have changed (in terms of where to find the download) a few times. I believe you will need to register for an account with Docker (and/or Docker Hub), but don't worry: we'll be using that account later in the series to publish our images to a Docker repository.

Once you've completed the installation, join us back here and we'll review a handful of commands we'll be using over and over to make our lives easier.

Note on Terminology: We'll be using terms like build, image, and container a lot in this article, so I wanted to define them upfront.
The build is the process of taking the steps outlined in the Dockerfile (more on that later) to create your image.
The image is like an onion, built up in layers from your Dockerfile. This is your primary asset.

Similar to a class definition and its instances, the container is an instance of your image: a container is to an image as an object is to a class. You can run many containers from the same image if required.

Simple Commands

So now Docker should be installed and running. You can confirm this by running docker -v in your terminal. At the time of this article, I am using Docker version 18.09.0, build 4d60db4. You don't need this exact version, but later in this series I am planning to use some of the newer features. I'll try to call out the sections where a specific version (or newer) is required.

Running docker in your terminal will give you a long list of commands you can use to manage your Docker assets. Let's look at a few:

docker ps

Returns a list of running containers

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Ok, so we see ... nothing. That's because we have no containers running. Let's run our first container using a predefined image we can pull from Docker Hub. It's time for hello-world!

docker images

This shows us a list of built images that we have on our machine. You can easily bring up any of these in a container using the command we will look at next: docker run [IMAGE]:[TAG] (the tag defaults to latest if omitted)
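For example, once we've pulled a couple of images over the course of this walkthrough, the output will look something like this (your image IDs, dates, and sizes will differ):

```shell
$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
ubuntu        latest   93fd78260bd1   2 weeks ago    86.2MB
hello-world   latest   fce289e99eb9   2 months ago   1.84kB
```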

docker run

If the image does not exist locally, Docker will pull it, then run a container from the provided image

$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

...
So what happened here? Basically, Docker looks at your local list of images. If the hello-world image isn't found locally, Docker pulls it from Docker Hub first. Docker then creates a container running this image.

If you read the output from our hello world, they even recommend what to try next.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Let's see what happens.

Now we're inside our Docker container! Crazy right? Try out some simple commands, muck around. Doesn't matter. We'll review how to kill the container shortly. Right now, nothing inside your container is linked to your file system so anything you do in here will not be persisted.

Go play! I'll wait...

Ok. So, before you $ exit your container, let's open a new terminal to see a little more of what's going on in your new container. Run the docker ps command again and you should see an entry like this:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fab2218a261 ubuntu "bash" About a minute ago Up About a minute elated_tesla

We have two unique identifiers here that we should take note of: the CONTAINER ID and the NAME. Either will allow us to get back into our container from another terminal window. We can try that as we review the next command.

docker exec

With this Docker command, we can execute a command against our running container. Using flags, we can even tell Docker that we want to remain interactive (-i) and that we want a terminal (-t). So, take the CONTAINER ID or the NAME from your docker ps output, and let's get back into our running container by running docker exec with the -it flags.

$ docker exec -it elated_tesla bash
root@3fab2218a261:/#

And we're right back in the terminal of our container! Pretty cool, eh? Ok, let's start cleaning up. Simply exit out of the container in our second terminal window by running $ exit.

Let's see the command we would use to stop our container if we weren't attached to its terminal. This is how you will stop containers that are running in detached (-d) mode.

Note on Flags: If you read the docker run section above, you'll see the same flags (-it) used there. They allowed us to run our container and remain attached to an interactive terminal inside it.
Had we instead used the -d flag in our docker run -d ... command, we would need the docker exec -it ... command to gain an interactive terminal.
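As a sketch of that detached workflow (the container name my-ubuntu is just an example we're choosing here):

```shell
$ docker run -dit --name my-ubuntu ubuntu
$ docker ps                       # my-ubuntu shows up as a running container
$ docker exec -it my-ubuntu bash  # attach an interactive terminal to it
root@<container-id>:/# exit
$ docker stop my-ubuntu
```

Note the -t in -dit: without a terminal allocated, the default bash process in the ubuntu image would exit immediately and the container would stop on its own.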

docker stop

$ docker stop elated_tesla
elated_tesla

Now, if we run docker ps again, we'll see an empty list. Well, this is a little misleading. Stopping a container doesn't destroy it: the container still exists, but docker ps only lists running containers, so we can't see it. Let's add a flag to our docker ps command.
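The -a (all) flag lists every container, including stopped ones. Assuming the ubuntu container from earlier, you'll see something like this (IDs, timings, and the generated name will differ):

```shell
$ docker ps -a
CONTAINER ID   IMAGE    COMMAND   CREATED          STATUS                     PORTS   NAMES
3fab2218a261   ubuntu   "bash"    12 minutes ago   Exited (0) 2 minutes ago           elated_tesla
```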

Found 'em! The problem is, they're still consuming some (not many, but some) resources. Let's fully clear them out. Keep in mind that anything you did inside the container really will be lost once we remove it, but in this case that's okay; we want a fresh start.

docker rm

$ docker rm elated_tesla
elated_tesla

When we run docker ps -a again, we'll see the container has been removed from our list of stopped containers. However, if we run docker images, we'll see that ubuntu is still an image on our host machine. This just means that when we spin up a new container, we won't need to download anything from Docker Hub (unless we use a different tag than the one we have locally).

Although we won't go into it in this article, if system resources ever become an issue, you can run docker image prune or docker container prune to free up some resources.

Recap

Ok! So, we covered how to pull images, start and stop containers, and remove and execute against them. We're at a pretty good place, so we're ready to start building our own custom Docker images! Stay tuned for part two of this series, where we'll dockerize a simple Vue.js app!