
Docker is an amazing tool for developers. It allows us to build and replicate images on any host, removing the inconsistencies of dev environments and reducing onboarding timelines considerably.

To provide an example of how you might move to containerized development, I built a simple todo API with NodeJS, Express, and PostgreSQL, using Docker Compose for development, testing, and eventually in my CI/CD pipeline.

In a two-part series, I will cover the development and pipeline creation steps. In this post, I will cover the first part: developing and testing with Docker Compose.

Requirements for This Tutorial

The todo app here is essentially a stand-in, and you could replace it with your own application. Some of the setup here is specific for this application, and the needs of your application may not be covered, but it should be a good starting point for you to get the concepts needed to Dockerize your own applications.

Once you have Docker and Docker Compose installed, and the application's source code in hand, you can move on to the next section.

Creating the Dockerfile

At the foundation of any Dockerized application, you will find a Dockerfile. The Dockerfile contains all of the instructions used to build out the application image. You could set this up yourself by starting from a base image and installing NodeJS and all of its dependencies; however, the Docker ecosystem has an image repository (the Docker Store) with a NodeJS image already created and ready to use.

In the root directory of the application, create a new Dockerfile.

/> touch Dockerfile

Open the newly created Dockerfile in your favorite editor. The first instruction, FROM, will tell Docker to use the prebuilt NodeJS image. There are several choices, but this project uses the node:7.7.2-alpine image. For more details about why I’m using alpine here over the other options, you can read this post.

FROM node:7.7.2-alpine

If you run docker build ., Docker will execute the instruction and output the ID of the image it creates.

With only one instruction in the Dockerfile, the build doesn't do much, but it does walk you through the process with little else going on. At this point, you have an image created, and running docker images will list the images available on your machine.

The Dockerfile needs more instructions to build out the application. Currently it’s only creating an image with NodeJS installed, but we still need our application code to run inside the container. Let’s add some more instructions to do this and build this image again.
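The exact Dockerfile from the original post isn't reproduced here, but based on the instructions it names and the /usr/app/ working directory referenced later, a minimal sketch might look like the following. (The original apparently used a temporary directory for dependency caching, as a commenter notes below; this simpler variant gets the same layer-caching benefit.)

FROM node:7.7.2-alpine

# Set the working directory for all subsequent instructions
WORKDIR /usr/app/

# Copy the package manifest and install dependencies first,
# so this layer stays cached until package.json changes
COPY package.json .
RUN npm install --quiet

# Copy the rest of the application source into the image
COPY . .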

This particular Dockerfile uses RUN, COPY, and WORKDIR. You can read more about those on Docker’s reference page to get a deeper understanding.

You have now successfully created the application image using Docker. Currently, however, our app won’t do much since we still need a database, and we want to connect everything together. This is where Docker Compose will help us out.

Docker Compose Services

Now that you know how to create an image with a Dockerfile, let’s create an application as a service and connect it to a database. Then we can run some setup commands and be on our way to creating that new todo list.

An important concept to understand is that Docker Compose spans “buildtime” and “runtime.” Up until now, we have been building images using docker build ., which is “buildtime”: this is when our images are actually built. “Runtime” is what happens once our containers are started from those images and put to use.

Compose triggers “buildtime,” instructing our images to build, but it also populates data used at “runtime,” such as environment variables and volumes. This is important to be clear on: when we add things like volumes and command in the Compose file, they override whatever the Dockerfile set up for the same things at “buildtime.”

Create a new docker-compose.yml file in the root directory of the application, open it in your editor, and copy/paste the following lines:
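(The original post's exact file isn't preserved here; this version is reconstructed from the service descriptions that follow. The version key and the depends_on entry are my assumptions.)

version: '2'

services:
  web:
    build: .
    command: npm run dev
    volumes:
      - .:/usr/app/
      # Anonymous volume so the buildtime node_modules survives the host mount
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - postgres   # assumption: start the database before the app
    environment:
      DATABASE_URL: postgres://todoapp@postgres/todos

  postgres:
    image: postgres:9.6.2-alpine
    environment:
      POSTGRES_USER: todoapp
      POSTGRES_DB: todos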

The web service

The first directive in the web service is to build the image based on our Dockerfile. This will recreate the image we used before, but it will now be named according to the project we are in, nodejsexpresstodoapp. After that, we are giving the service some specific instructions on how it should operate:

command: npm run dev – Once the image is built, and the container is running, the npm run dev command will start the application.

volumes: – This section will mount paths between the host and the container.

.:/usr/app/ – This will mount the root directory to our working directory in the container.

/usr/app/node_modules – This creates an anonymous volume for the node_modules directory, so the dependencies installed at buildtime are not hidden by the host mount above. (On the host this shows up as an empty folder; the modules live in the container.)

environment: – The application itself expects the environment variable DATABASE_URL in order to run. It is read in db.js.

ports: – This will publish the container’s port, in this case 3000, to the host as port 3000.

The DATABASE_URL is the connection string. postgres://todoapp@postgres/todos connects using the todoapp user, on the host postgres, using the database todos.
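For reference, a db.js along these lines would consume that variable. This is a sketch assuming the pg library, not the project's actual file:

// db.js (sketch; the project's actual file may differ)
const { Pool } = require('pg');

// DATABASE_URL is injected by Docker Compose at runtime
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

module.exports = pool;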

The Postgres service

Like the NodeJS image we used, the Docker Store has a prebuilt image for PostgreSQL. Instead of using a build directive, we can use the name of the image, and Docker will grab that image for us and use it. In this case, we are using postgres:9.6.2-alpine. We could leave it like that, but it has environment variables to let us customize it a bit.

environment: – This particular image accepts a couple of environment variables so we can customize things to our needs.

POSTGRES_USER: todoapp – This creates the user todoapp as the default user for PostgreSQL.

POSTGRES_DB: todos – This creates the default database named todos.

Running the Application

Now that we have our services defined, we can build the application using docker-compose up. This will show the images being built and the containers eventually starting. After the initial build, you will see the names of the containers as they are created.

At this point, the application is running, and you will see log output in the console. You can also run the services as a background process, using docker-compose up -d. During development, I prefer to run without -d and create a second terminal window to run other commands. If you want to run it as a background process and view the logs, you can run docker-compose logs.
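For example, to run detached and then follow the web service's logs:

/> docker-compose up -d
/> docker-compose logs -f web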

At a new command prompt, you can run docker-compose ps to view your running containers. You should see something like the following:

Name                              Command                         State  Ports
------------------------------------------------------------------------------------------------
nodejsexpresstodoapp_postgres_1   docker-entrypoint.sh postgres   Up     5432/tcp
nodejsexpresstodoapp_web_1        npm run dev                     Up     0.0.0.0:3000->3000/tcp

This tells you the names of the services, the commands used to start them, their current state, and the ports. Notice that nodejsexpresstodoapp_web_1 lists the port as 0.0.0.0:3000->3000/tcp, which means you can access the application using localhost:3000/todos on the host machine.

/> curl localhost:3000/todos
[]
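If you want to add an item, a POST along these lines should work, assuming the API accepts a JSON body with a title field (a hypothetical shape; check the project's routes for the actual one):

/> curl -X POST localhost:3000/todos -H "Content-Type: application/json" -d '{"title": "Learn Docker Compose"}'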

The package.json file has a script to automatically build the code and migrate the schema to PostgreSQL. The schema and all of the data will persist as long as the postgres container is not removed; note that it is the container, not the postgres:9.6.2-alpine image, that holds the data.

Eventually, however, it would be good to check how your app builds with a clean slate. You can run docker-compose down, which stops and removes the containers and lets you see how everything behaves from a fresh start.
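A typical reset looks like this; the --build flag forces the images to be rebuilt on the way back up:

/> docker-compose down
/> docker-compose up --build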

Feel free to check out the source code, play around a bit, and see how things go for you.


Testing the Application

The application itself includes some integration tests built using Jest. There are various ways to go about testing, including creating something like Dockerfile.test and docker-compose.test.yml files specific to the test environment. That’s a bit beyond the current scope of this article, but I want to show you how to run the tests using the current setup.
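For a sense of what such an integration test looks like, here is a minimal sketch using Jest with supertest (hypothetical file and route names; the project's actual tests may differ):

// todos.test.js (hypothetical sketch)
const request = require('supertest');
const app = require('../server');

describe('GET /todos', () => {
  it('responds with a JSON array', async () => {
    const response = await request(app).get('/todos');
    expect(response.statusCode).toBe(200);
    expect(Array.isArray(response.body)).toBe(true);
  });
});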

The current containers are running under the project name nodejsexpresstodoapp, which defaults to the directory name. If we attempt to run commands under the same project, Compose will reuse and restart the existing containers, which is not what we want.

Instead, we will use a different project name to run the application, isolating the tests into their own environment. Since containers are ephemeral (short-lived), running your tests in a separate set of containers makes certain that your app is behaving exactly as it should in a clean environment.

In your terminal, run the following command:

/> docker-compose -p tests run -p 3000 --rm web npm run watch-tests

You should see Jest run through the integration tests and wait for changes.

The docker-compose command accepts several options, followed by a command. In this case, you are using -p tests to run the services under the tests project name. The command being used is run, which will execute a one-time command against a service.

Since the docker-compose.yml file publishes port 3000, we use -p 3000 to publish the container's port on a random host port instead, preventing a collision with the already-running web service. The --rm option removes the container when it exits. Finally, we run npm run watch-tests in the web service.
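One detail worth knowing (my observation, not from the original post): any services Compose starts for the tests project keep running after the test container exits, so you can clean them up with:

/> docker-compose -p tests down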

Conclusion

At this point, you should have a solid start using Docker Compose for local app development. In the next part of this series about using Docker Compose for NodeJS development, I will cover integration and deployments of this application using Codeship.

Is your team using Docker in its development workflow? If so, I would love to hear about what you are doing and what benefits you see as a result.


Join the Discussion


CWSpear

Your Dockerfile could just look like this:

FROM node:7.7.2-alpine

WORKDIR /usr/app/

COPY ./package.json ./
RUN npm install --quiet

COPY ./ ./

There’s no need for the temporary directory. This benefits from the caching you mention in the same way. It also makes for a smaller image (even considering the apk update which doesn’t really seem necessary either way?).

You should also mention it’s pretty important to put node_modules in the .dockerignore (as well as a build/dist directory if you have a build step for your project).
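A .dockerignore along those lines is just a list of paths (the build/dist entries only matter if your project has a build step):

node_modules
build
dist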

I find it dangerous to mount the local node_modules to the host. It will seem to work just fine; however, native node modules may misbehave even if your dev machine is a Linux box and not specifically Alpine. Mileage will vary on Macs, and thinking about Windows makes me nauseous.

Thanks for the comment! I did it this way because I wanted to be able to edit my files on my host machine. However, if you mount the root folder without also mounting the node_modules folder, the buildtime `node_modules` would be overwritten.

What I don’t go into detail on here is adding new `node_modules`, which I do directly in the container and not from my local machine. I was saving that for a later post.

Sounds good! Looking forward to the next blog post. If possible, please emphasize the importance of `npm shrinkwrap` when running `npm install` on different environments to achieve repeatable results.

Matt Welke

Wouldn’t using yarn also achieve this?

Christian Katzorke

Yes, sure. Besides the lockfile, yarn also provides consistency/security by using checksums (and an enhanced local registry). But if you can’t or don’t want to (for any reason) use yarn, *please* at least use shrinkwrap. Any project that lasts longer than one week will otherwise end up with unexpected errors across different environments/stages/continuous integration scenarios.

marcoandremartins

you can use .dockerignore for that too

Jörn Zaefferer

Have you ever written that 2nd part? I couldn’t find any links from this post nor find it directly on the blog index.

I’m specifically interested in your workflow for adding more dependencies. Adding them inside the container sounds interesting – how would you combine that with using a lock file (using yarn or npm@5)?

When developing, I don’t always run `docker-compose up`, and instead do `docker-compose run --rm web sh`, and then I will start the app or update modules right there as if it were my own terminal.

Since the volumes are mapped, whenever you update something in the container, it will also update locally. The next time you build the container, it will pick everything up. I believe you should see the same with the lock files as well.

Let me know if that helps.

Dmitry Kirilyuk

It’s OK to mount the container’s node_modules to the host in most cases. This allows you to debug node_modules in your host IDE. And how often do you need to debug binaries? :) But this article is a bit outdated, because node_modules will not be mapped from the container by using /usr/app/node_modules as a volume in the docker-compose file; it will be empty on the host. I don’t know a nice workaround for now. What I do at the moment is manually install node_modules on the host machine when I need to debug them.

I work directly in the containers to debug, because my host machine will have things that are not always available on the production server or containers. I much prefer to jump into the shell and figure things out there if need be.

You are right – it is an empty folder, and I could change that wording a bit. The important thing is that when you copy over everything in the Dockerfile, you don’t want to lose the installed node_modules. That is why you mount the folder without specifying the host folder. This comes in handy when I’m developing something in, for example, Python. I don’t have all the proper tooling installed locally, so doing everything in Docker makes life easier.

Happy coding!

Alejandro Bar Acedo

Hi there. I have some problems trying to run a container to use gulp. I have the same content in my Dockerfile and docker-compose.yml. When I run docker-compose run --rm service_name gulp gulp_task_name, I get an error saying that the command gulp is not found in $PATH. I thought I could run any command inside a node image, as is done with esw and jest in this tutorial.

The local `node_modules/.bin/` folder is not in the path. You can resolve this by adding a line in the Dockerfile to update the path, installing gulp globally, or using `docker-compose run --rm service_name ./node_modules/.bin/gulp gulp_task_name`.
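A one-line sketch of the PATH approach, assuming the /usr/app working directory from the article:

ENV PATH /usr/app/node_modules/.bin:$PATH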

Alejandro Bar Acedo

I see. I was looking at how it works with your sample project: if I try to run the tests using esw or jest directly, it doesn’t work, but if I run the script from package.json, it works.

Thanks for the tutorial! Everything worked great, but I’m stuck on something. If I change something in my main server.js file, I have to actually run “docker-compose build” again to see the reflected changes.

Martin Pultz

I’ve been playing with docker and docker-compose recently, and I can do the basics, and this article has helped my understanding even more. The only thing I can’t seem to figure out is how to migrate the database on `docker-compose -f docker-compose.dev.yml up` when node and postgres are in separate containers. I’m trying to dockerize a project that already has scripts to do this, like `npm run db:migrate:dev`, but I can’t figure out where this can be executed; even though I’ve bridged the containers, it seems to throw an error doing `ENTRYPOINT ["npm", "run", "db:migrate:dev"]`. Any suggestions?

Rob Brennan

This is great!! I totally wasn’t expecting the idea of having a temporary container to run tests, but damn – that is 100% spot on. Excellent post. If you’re ever in Seattle, lemme know – I’d love to buy you an IPA (or beverage of your choice)

Since Docker 17, you no longer need docker-compose. Docker now has built-in stack support, compatible with docker-compose file declarations. It also offers extra features that haven’t been implemented in Compose.