Technologies

As before, we can install Docker for our platform of choice. We will be using the docker command-line utility to create our services.

For our backend stack we can use whatever technology we want to create our APIs and web apps. We will be building our backend services with NodeJS as the server-side technology and MongoDB as our database, but we are not restricted to these choices; PHP, Java, MySQL, PostgreSQL, and so on would work just as well.

NodeJS

NodeJS is a server-side JavaScript runtime that lets us build anything from web apps to REST APIs to mobile backends. It offers high performance thanks to its non-blocking I/O model and can handle tens of thousands of operations per second (if programmed correctly ;-)).

MongoDB

MongoDB is a popular NoSQL database built to handle large volumes of data, and it is easy to deploy and use. Its flexibility, if not carefully managed, can easily get out of hand, but we will be careful when persisting our data.

We are keeping this backend service simple: a small program that prints Hello World! in the browser, connects to the MongoDB service running on the cluster, and sends logs to our logging stack.

The index.js file at the root of the application simply requires the app and controllers.

require('./src/app');
require('./src/controllers');

This will run the code in both ./src/app/index.js and ./src/controllers/index.js.

We will use this file to build an image called nodeapp1, similar to how we built the Logstash image in the previous post.
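The Dockerfile itself is not shown here; a minimal sketch of what it could look like, assuming a node base image and port 8080 (both assumptions, not the exact file from this series):

```dockerfile
# Hedged sketch: the base image tag and port are assumptions.
FROM node:8-alpine

# Our logger switches on the Logstash transport when NODE_ENV is production.
ENV NODE_ENV=production

WORKDIR /usr/src/app

# Install dependencies first so Docker can cache this layer.
COPY package.json ./
RUN npm install --production

# Copy the rest of the app and start it.
COPY . .
EXPOSE 8080
CMD ["node", "index.js"]
```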

./src/app Directory

Inside the ./src/app directory, the config directory is empty. If our application had any configuration, such as settings specific to the production environment, we would put it there. That brings us to the ./src/app/index.js file.

This file sets up our Express app, Mongo connection, and logger. const logger = require('../libs/logger'); pulls in our logging library; we use the excellent winston library as our logger. There are a lot of good reasons to use winston, but our main reason here is its Logstash transport. We will examine the logger in detail later.

We have to use two Express instances because the app will be accessed under /app1. One instance holds all of our routes, and the other tells the proxy that /app1 is the access point. Otherwise, we would have to prepend /app1 to every route that we create.

Then we use the mongodb driver's MongoClient to connect to our mongo service running on the cluster, with the nodeapp collection. We set the autoReconnect option to true because we don't know whether the mongo service will be ready before our app tries to connect. If we connect successfully, we emit a log message using the logger; if we can't, we log the error.

We use app.set('trust proxy', true); because we are placing the app behind a proxy. The rest of the middleware is pretty standard Express fare. We use Nunjucks as our templating engine.

It is important that we start the server with the second instance of Express.

Here we use winston to create our initial logger, which only logs to the console. However, when the NODE_ENV environment variable is set to production, as it is in our Dockerfile, we add the Logstash transport to our logger. The logger also handles exceptions and makes them human readable.

./src/public Directory

The public directory can contain any static assets that our app may have. We have set up our Express app to serve these public resources under the /public URL path.

./src/views Directory

Finally, our views directory contains all of our templated views, with index.html as the main one.

In it we define a few blocks for css, content, and scripts; the file doesn't have much else going on.
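A sketch of what src/views/index.html could contain, assuming standard Nunjucks block syntax (the block names come from the description above; the markup around them is an assumption):

```html
<!DOCTYPE html>
<html>
<head>
  <title>nodeapp1</title>
  {% block css %}{% endblock %}
</head>
<body>
  {% block content %}
  <h1>Hello World!</h1>
  {% endblock %}
  {% block scripts %}{% endblock %}
</body>
</html>
```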

Docker Services

Now we need to build a Docker image out of our Node app and deploy it to the cluster. To build it, we can use a command similar to the one we used to create our own Logstash image. We make sure that we are in the root of our app and execute the following:

docker build -t <registry username>/nodeapp:latest .

Once the image is built, we can push it to our registry of choice. As before, we will simply use hub.docker.com as our registry. We can log in to Docker Hub with the following command:

docker login

Then use the following command to push the image to the hub:

docker push <registry username>/nodeapp:latest

Now that our image is pushed, we can start creating our services and the related networks.

Our Mongo service only needs to connect to the apps network. We mount a volume to it so the persisted data is not lost when the service is destroyed. Finally, we need to add constraints to make sure it only starts on the worker nodes, and only those worker nodes that aren’t dedicated to our logging stack.
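A hedged sketch of the Mongo service as a docker service create command; the network name (apps), volume name, and the node label used in the second constraint are assumptions carried over from this series' earlier posts:

```shell
docker service create \
  --name mongo \
  --network apps \
  --mount type=volume,source=mongodata,target=/data/db \
  --constraint 'node.role == worker' \
  --constraint 'node.labels.logging != true' \
  mongo
```

The named volume survives the service being removed, and the two constraints keep Mongo off the managers and off the logging-dedicated workers.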

Our app1 service must connect to all three defined networks because it must communicate with services in all of them. We add the required labels to notify our swarm listener of our app's configuration, and we use the same constraints as before to make sure the service only starts on the right worker nodes.
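A sketch of the app1 service along the same lines; the network names, the port, and the com.df.* labels assume the proxy stack is Docker Flow Proxy with its swarm listener, so substitute your proxy's label convention if it differs:

```shell
docker service create \
  --name app1 \
  --network proxy \
  --network apps \
  --network logging \
  --label com.df.notify=true \
  --label com.df.servicePath=/app1 \
  --label com.df.port=8080 \
  --constraint 'node.role == worker' \
  --constraint 'node.labels.logging != true' \
  <registry username>/nodeapp:latest
```

The servicePath label is what makes the proxy route customdomain.com/app1 to this service, matching the mount point we chose in the app.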

We can verify our node app is running by visiting customdomain.com/app1; we should see Hello World! on the screen. We can also verify that our logger is working by visiting logging.customdomain.com and checking Kibana. As we can see from the messages below, the logger is sending logs to Logstash correctly.

Done

We have finally completed the original infrastructure that we wanted to create. Now we have communication going back and forth between the frontend, backend, and logging services.

Up Next

In the next post I will document some of the challenges, tips, tricks, and limitations of the current setup.