Vishal Kumar is a software architect and a JavaScript enthusiast. He has spent 14 years developing web applications for small to big enterprises. He is the founder of bizsitegenie.com, a web-based visual interface to create web applications using the MEAN stack and Docker.

The rate of adoption of Docker as a containerization solution is soaring. Many companies now use Docker containers to run apps, and in many scenarios a Docker container is a better approach than spinning up a full-blown virtual machine.

In this post, I’ll break down all the steps I took to successfully install and run a web application built on the MEAN stack (MongoDB, Express, AngularJS, and Node.js). I hosted the application in Docker containers on Amazon Web Services (AWS).

Also, I ran the MongoDB database and the web application in separate containers. There are lots of benefits to this approach of having isolated environments:

Since each container has its own runtime environment, it’s easy to modify the environment of one application component without affecting the others. We can change the installed software or try out different software versions until we figure out the best possible setup for that specific component.

Since our application components are isolated, security issues are easy to deal with. If a container is attacked or a malicious script ends up being inadvertently run as part of an update, our other containers are still safe.

Since it is easy to switch out and change the connected containers, testing becomes a lot easier. For example, if we want to test our web application with different sets of data, we can do that easily by connecting it to different containers set up for different database environments.

MEAN Web Framework

The web application that we’re going to run is the framework code for MEAN.JS. This full-stack JavaScript solution builds fast, robust, and maintainable production web applications using MongoDB, Express, AngularJS, and Node.js.

Another great advantage of MEAN.JS is that we can use Yeoman Generators to create the scaffolding for our application in minutes. It also has CRUD generators which I have used heavily when adding new features to the application. The best part is that it is already well set up to support Docker deployment. It comes with a Dockerfile that can be built to create the container image, although we will use a prebuilt image to do it even faster (more on this later).

Running Docker on an Amazon Instance

You might already be aware that you can use basic AWS services free for a full year. The following steps will walk you through how to configure and run a virtual machine on AWS along with the Docker service:

To begin, create your free account on aws.amazon.com. Make sure to choose the Basic (Free) support plan. You will be redirected to the AWS welcome page.

On this page, click on Launch Management Console.

We will first go through the steps to create a user with the required credentials to manage our instance. Go to Services and then IAM.

Go to the Users link on the left.

Click Create New Users.

Create a new user by providing a username. Make sure the checkbox Generate an access key for each user is checked.

Once the user is created, you will get the option to Download Credentials for this user. Download the file. This file will have the Access Key Id and the Secret Access Key for this user. We will need to use these credentials to connect remotely to AWS.

The recommended approach for using credentials when connecting remotely to AWS is to keep them in a file at the path ~/.aws/credentials. Create this path if it does not exist on your local computer and then create a file named credentials. The content of this file should look like this:
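This is the standard AWS credentials file format; substitute the Access Key Id and Secret Access Key from the file you downloaded for the placeholder values:

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```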

After creating the user and storing the credentials locally, we will also need to give the required permissions to this user in AWS.

Go to Services->IAM->Groups. Click Create New Group.

Go through the wizard steps to create the new user group. Enter the group name and click Next.

You will then see the option to attach a Policy. Check the first option, Administrator Access, and click Next.

You will see a Review screen. Click Create Group to finally create the group.

Once the group is created, click on the name of the group to see a screen to manage users, permissions, etc. Add the user that you previously created to this group by clicking Add Users to Group.

We will also need a vpc-id to launch our instance. While still logged on to AWS Console, click Services->VPC. Click on the link to see your VPC details. Copy the vpc-id from the grid view that appears. Now we are all set to launch our instance.

To launch a new instance on AWS remotely from your computer, open a new shell window. Then use the Docker Machine create command to create a new EC2 instance. Use the amazonec2-vpc-id flag to specify the vpc-id. Use the amazonec2-zone flag to supply the zone in which your instance should exist. Finally, specify the name by which you would like to refer to the instance.

$ docker-machine create --driver amazonec2 --amazonec2-vpc-id [vpc-id-copied-in-previous-step] --amazonec2-zone c aws07
Running pre-create checks...
Creating machine...
(aws07) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env aws07

Once this instance is launched and ready for use, we can tell Docker to execute commands against aws07 as follows:

$ eval $(docker-machine env aws07)

Running MongoDB Database as a Container

Now that we have Docker running on our Amazon instance, we can go ahead and run our containers.

As I mentioned before, we’re going to run our MongoDB database and our web application on separate containers. I chose the official repo for Mongo on the docker repository. We can pull this image and run it as a detached container in one simple step:

$ docker run --name mymongodb -d mongo

The last argument mongo is the name of the image from which it should create the container. Docker will first search for this image locally. When it doesn’t find it, it will go ahead and download it and all the base images that it is dependent on. Convenient!

Docker will then run this image as a container. The -d flag ensures that it runs in detached mode (in the background) so that we can use this same shell to run our other commands. We can do a quick check after this to make sure that the container is up and running by using the docker ps command:
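The output should include a line for our mymongodb container, something like this (the container ID and timings will differ on your machine):

```
$ docker ps
CONTAINER ID   IMAGE   COMMAND                   CREATED         STATUS         PORTS       NAMES
f2a3b4c5d6e7   mongo   "/entrypoint.sh mongod"   2 minutes ago   Up 2 minutes   27017/tcp   mymongodb
```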

The startup script for this image already runs the mongo service listening on port 27017 by default. So there is literally nothing else we had to do here except for that one docker run command.

Running the MEAN Stack Container

The next phase of this project is to run our web application as a separate container.

The MEAN stack code base has a lot of dependencies like Node, Bower, Grunt, etc. But once again, we don’t need to worry about installing them if we have an image that already has all these dependencies. Turns out there is an image on the Docker Hub that already has everything we need.
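Putting it all together, the command looks like this (mymeanjs is the name we’re giving this container; each flag is explained in the following paragraphs):

```
$ docker run -i -t -p 80:3000 --name mymeanjs --link mymongodb:db_1 maccam912/meanjs:latest bash
```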

Now there is a lot going on with this single command. To be honest, it took me some time to get it exactly right.

The most important piece here is the --link mymongodb:db_1 argument. It adds a link between this container and our mymongodb container. This way, our web application is able to connect to the database running on the mymongodb container. db_1 is the alias name that we’re choosing to reference this connected container. Our MEAN application is set to use db_1, so it’s important to keep that name.

Another important argument is -p 80:3000, where we’re mapping the 3000 port on the container to port 80 on the host machine. We know that web applications are accessed through the default port of 80 using the HTTP protocol. Our MEAN application is set to run on port 3000. This mapping enables us to access the same application from outside the container over the host port 80.

We of course have to specify the image from which the container should be created. As we discussed before, maccam912/meanjs:latest is the image we’ll use for this container.

The -i flag is for interactive mode, and -t is to allocate a pseudo terminal. This will essentially allow us to connect our terminal with the stdin and stdout streams of the container. This stackoverflow question explains it in a little more detail.

The argument bash hooks us into the container where we will run the required commands to get our MEAN application running. We can bash into a previously running Docker container, but here we are doing all that with just one command.

Building and Running our MEAN Application

Now that we’re inside our container, running the ls command shows us many folders including one called Development. We will use this folder for our source code.

cd into this folder and run git clone to get the source code for our MEAN.JS application from GitHub:
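Assuming the official MEAN.JS repository on GitHub, the commands inside the container look like this:

```
root@7f4e72af1cf0:/# cd Development
root@7f4e72af1cf0:/Development# git clone https://github.com/meanjs/mean.git meanjs
root@7f4e72af1cf0:/Development# cd meanjs
```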

A couple of hiccups to watch out for: For some reason, my npm install hung during a download. So I used Ctrl + C to terminate it, deleted all packages to start from scratch, and ran npm install again. Thankfully, this time it worked:
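That recovery looked roughly like this — deleting node_modules clears out the partially downloaded packages before retrying:

```
root@7f4e72af1cf0:/Development/meanjs# rm -rf node_modules
root@7f4e72af1cf0:/Development/meanjs# npm install
```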

Install the front-end dependencies by running bower. Since I’m logged in as the superuser, bower doesn’t like it. But it does give me an option to still run it by using the --allow-root option:

root@7f4e72af1cf0:/Development/meanjs# bower install
bower ESUDO Cannot be run with sudo
....
You can however run a command with sudo using --allow-root option
root@7f4e72af1cf0:/Development/meanjs# bower install --allow-root

Run our grunt task to run the linter and minimize the js and css files:
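The MEAN.JS Gruntfile provides a task for this; assuming the task is named build, the command is simply:

```
root@7f4e72af1cf0:/Development/meanjs# grunt build
```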

Now, we are ready to run our application. Our MEAN stack looks for a configuration variable called NODE_ENV, which we will set to production before using the default grunt task to run our application. If you did all the steps right, the application starts up without errors.
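Setting the variable inline for this one command, that looks like:

```
root@7f4e72af1cf0:/Development/meanjs# NODE_ENV=production grunt
```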

Validating Our Application from the Browser

Our application would have given errors if there was some problem running it or if the database connection failed. Since everything looks good, it’s time to finally access our web application through the browser.

Back in the AWS console, go to Services->EC2 and select your running instance. Click on the security group link for the instance. You should see the settings page for the security group.

Click the “Inbound” tab at the bottom, and then click the “Edit” link. You should see that SSH is already added. Now we need to add HTTP to the list of inbound rules.

Click “Add Rule.”

Select HTTP from the dropdown menu and leave the default setting of port 80 for the Port Range field. Click “Save.”

Copy the URL of our instance from the Public DNS column and open it in the browser. You should see the homepage of our fabulous application. You can validate it by creating some user accounts and signing in to the app.

So that’s it. We’ve managed to run our application on AWS inside isolated Docker containers. There were a lot of steps involved, but at the crux of it all, we really needed to run only two smart Docker run commands to containerize our application.

Join the Discussion


EnvyAndroid

Would it not be better to use coreos for hosting docker containers?

You could also use the –restart=always flag to automatically restart your mongo container on reboot.

Proper backup can ensure we do not lose the data if the container instance is somehow lost. Of course we can save the whole container state as an image with one command, but the appropriate thing to do here is to backup the files where the mongod service is flushing the data. You would want the files to be on a mounted shared folder that the docker container and its mongod service has access to. This way the persisted data is independent of the container and it’s state. You can then spin up a new container or a different mongod service to access the same file data as need be.

Sarath

Great article.. very nice step by step illustration.. thank you.. In final step when running grunt I am getting this error.. What could have been wrong ?

Hi Sarath, it clearly means that the connection to the MongoDB instance could not be established. Things to check for:
1) Is the MongoDB container up and running?
2) Check again on the command you used to pipe your connection between the MeanJS container and the Mongo one when you started the MeanJS container.
3) The MeanJS site can be run in dev and production. Make sure the connection info in the config for the environment you are running in matches with the MongoDB instance.

Guy Ellis

Thanks! Looking at the docker file for MEANJS https://registry.hub.docker.com/u/maccam912/meanjs/dockerfile/ it looks like the MEANJS container installs MongoDB in its container as well. I completely agree with you that Node and Mongo should be in separate containers. I’m guessing that maccam912 is just making it easier for users to get started with that container as a completely self contained solution to running MEANJS. I’m wondering if that was part of your space problems? i.e. 2 installations of MongoDB.

Hmm. The Docker images still work. I used them couple of days back. So node-gyp should work. Unless it errored out because of a resource problem (most probably storage). How much volume space did you allocate to your instance?

Sorry for the delayed response Wilson. I had to find the time to run these steps again and see what’s going on. Turns out you are right. Meanjs project that I have referenced in this blog is undergoing heavy development and the source code has changed since this blog. The good news is that, even though I got that node-gyp error, I was still able to run the application with some workarounds. Seems like it is an optional dependency. After the npm install, I did a bower install. One more thing. It looks like Ruby and Sass have also been added as new dependencies. So if you do a grunt as the next step, it will complain and ask you to install them. I just bypassed that by running “node server.js” directly instead of running the grunt command. Hopefully the source code will stabilize in the near future.

GameKyuubi

I got it working by installing nvm and then forcing a stable version of node instead of a pre build, and then installing ruby-compass.

Ignoring the npm ‘gyp’ errors and starting via ‘node server.js’ seems to work. I’m new to AWS so could be missing something obvious but I cannot connect to neither the IP nor the public DNS. Are the instructions complete above?

All the individual technologies of MEAN take a bit of time to master. meanjs.org site recommends the best resources for that. You can maybe start with an overall introduction. Here is a two hour tutorial I had created couple of months back: http://bit.ly/1KlBzSk You will also find a lot of good blog posts explaining it.

GameKyuubi

I mean more specifically how to work with it in a familiar dev environment once it’s set up via Docker like this.

I keep the code in my git repository. And my code editor is outside of the Docker container. After my changes, I checkin the code. Then SSH into the Docker container and do a git pull from the appropriate folder.

Thanks for this. I started to work through a tutorial with all MEAN components running on my host but it seems far preferable, and not that much more difficult, to put it up on someone else’s hosted servers – especially given that I want to write an actual single page app at some point. Some questions:

1. this was totally free on aws for one year? When I looked I got the feeling that there were some other costs for hosting mongodb etc

2. in terms of maintenance do you have a feel for how much work is needed in the background to ensure that the various component frameworks are kept secure and up to date?

3. is it fairly straightforward to move content off of aws to another hosted solution when the year is up? what aspects should one bear in mind when moving to a different provider’s platform (equivalency of MEAN components etc)?

4. I assume it’s fairly trivial to buy a domain name and then register it with DNS to point to my free aws server?

Sorry if these are a bit naive as I’m new to this whole MEAN area. I’ve run a small website on a service provider’s hosting platform before. The details are different but the concepts seem pretty much the same, albeit I’m taking care of more of it myself.

Thanks.

Chip Pinkston

I just tried to get this up and running last night but I ran into a bunch of issues with npm and bower. Based on the comments here, I went the route of installing nvm, and tweaking a couple of other things to make the whole process work. The following are all of the commands I ran after the docker command to create (and connect to) the mymeanjs container. I did all of these before the git clone.