Docker Hub: Distribution of containers

Now that you’re reading this guide, you might be curious, or maybe you just want to see these examples working live. Well, with Docker you can easily run these container images. If you have the Docker Toolbox installed, this should be very easy; you just need access to my containers. Enter Docker Hub! Docker Hub is like GitHub, but for Docker images.

The Docker Hub is a public registry maintained by Docker, Inc. It contains images you can download and use to build containers. It also provides authentication, work group structure, workflow tools like webhooks and build triggers, and privacy tools like private repositories for storing images you don’t want to share publicly.

Let me first show you how you can add your own images to the Docker Hub; afterwards, I will show you how to check out these images.
First, we are going to add an Automated Build repository in Docker Hub. For that, we first need to push the code to GitHub. If you followed this guide, you should have done this by now.

Adding images to Docker Hub

We will need a working image, which you will have if you completed the previous chapters.

Next, we will link our GitHub account with Docker Hub to add an automated build repository. You will need a Docker Hub account: https://hub.docker.com/login/

Choose between “Public and Private” or “Limited Access”. The “Public and Private” option is the easiest to use, as it grants Docker Hub full access to all of your repositories. GitHub also allows you to grant access to repositories belonging to your GitHub organizations. If you choose “Limited Access”, Docker Hub only gets permission to access your public data and public repositories.

I chose “Public and Private”, and once I was done with that, it forwarded me to a GitHub page (since I’m logged in on GitHub), which asks me to grant permission so Docker Hub can access my GitHub repositories:

Now go back to your Docker Hub dashboard, and click Create > Create Automated Build in the dropdown next to your account name, in the top right:

Select Create Auto-build: GitHub, select your GitHub account, and then select the repository docker-ext-client; enter a description of max. 100 characters and save. Repeat these steps for docker-node-server.

Once the Automated Build is configured, it will automatically trigger a build and, in a few minutes, you should see your new Automated Build on the [Docker Hub](https://hub.docker.com/) registry. It will stay in sync with your GitHub repository until you deactivate the Automated Build.

Now go to Build Settings. You should see this screen:

You can click the Trigger button to trigger a new build manually.

Automated Builds can also be triggered via a URL on Docker Hub. This allows you to rebuild an Automated Build image on demand. Click the Active Triggers button.

Creating an automated build repo means that every time you make a push to your Github repo, a build will be triggered in Docker Hub to build your new image.

Make sure, when committing the docker-ext-client app to Git, that you check in the production build/production/Client folder, as this folder will be used by the Docker image, not the folder with your local Sencha (class) files.

Running images from Docker Hub

Now that we know how to add Docker images to the Docker Hub, let's check out some images.

First download the image from the Docker Hub:

$ docker pull savelee/docker-ext-client

Then run the new Docker image. (The port numbers are placeholders; use the port that is exposed in your Dockerfile.)

$ docker run --name client -p <host-port>:<container-port> -d savelee/docker-ext-client

--name = give your container a name
-p = bind a host port to the container port which is exposed in the Dockerfile
-d = run the container in detached (background) mode

The last argument is the name of the image you want to run.

Conclusion

The last part of the tutorial focused on publishing Docker images to the Docker Hub. If you followed all the tutorials in this 8-part series, you've learned the following:

Full-stack JavaScript for the enterprise, with Ext JS 6 on the front-end

Node.js on the back-end

A NoSQL database with MongoDB and Mongoose

About Docker, and how to create containers

How to link Docker containers with Docker Compose

How to publish Docker images with GitHub and Docker Hub

The best part of all this is that you can easily swap one technology for another. For example, I could link new Docker images to run Ext JS 6 on a Python/Django with MySQL environment, or an Angular 2 app on Node.js with CouchDB...

Docker Compose: Linking containers

Docker Compose is a tool for defining and running multi-container Docker applications.

Docker is a great tool, but to really take full advantage of its potential it's best if each component of your application runs in its own container. For complex applications with a lot of components, orchestrating all the containers to start up and shut down together (not to mention talk to each other) can quickly become confusing.

The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team eventually decided to make their own version based on the Fig source. They called it: Docker Compose.
In short, it makes dealing with the orchestration processes of Docker containers (such as starting up, shutting down, and setting up intra-container linking and volumes) really easy.

So, with Docker Compose you can spin up various Docker containers and link them to each other.
That’s great, because if you ever decide to get rid of the Node.js back-end and instead use something else, let’s say Python with Django, you would just link to another image.

You will use a Compose file (docker-compose.yml) to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.
For more information, see: https://docs.docker.com/compose/overview/
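To give an idea of what such a Compose file looks like, here is a minimal sketch of a docker-compose.yml for our setup, using the two images we pushed to Docker Hub plus the official mongo image. The service names, ports, and link alias are assumptions; adapt them to your own Dockerfiles:

```yaml
# Sketch of docker-compose.yml (Compose v1 format, as used in the Fig era)
server:
  image: savelee/docker-node-server
  ports:
    - "3000:3000"   # assumed Node.js port
  links:
    - mongodb       # injects MONGODB_* env variables into the server container
client:
  image: savelee/docker-ext-client
  ports:
    - "80:80"       # assumed web server port for the Sencha build
mongodb:
  image: mongo
```

With this file in place, `docker-compose up` starts all three containers together, and `docker-compose down` (or Ctrl+C) stops them.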

Remember how, in our client Sencha app, we wrote URLs pointing to the Node.js back-end? We hardcoded them to the localhost URL. This won’t work anymore: when the container is running, it doesn’t know localhost, only its own IP address.

Let’s figure out what the Docker machine IP address is. While you are still in the Docker terminal, enter the following command:

$ docker-machine ip

We will now need to change the Sencha URLs. You could hardcode these to the Docker machine IP, or you could let JavaScript detect the hostname you are currently using. (Remember, our Node server runs on the same host as our Sencha app; it just uses a different port.)
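The hostname-detection approach can be sketched like this; the function name and port 3000 are my own example choices, not fixed by the project:

```javascript
// Build the back-end base URL from a hostname instead of hardcoding
// "localhost". In the browser you would pass window.location.hostname,
// so the app always talks to the same host it was served from.
function apiBaseUrl(hostname, port) {
    return 'http://' + hostname + ':' + port + '/';
}

// In the Sencha app, something like:
// var base = apiBaseUrl(window.location.hostname, 3000);
```

This way the same build works on localhost, on the Docker machine IP, or on any server you deploy to.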

You will need to build the Sencha app before moving on with Docker. We will copy the Sencha build directory over to our container, and this build needs to be finalized, concatenated, and minified, to improve performance while serving the page.
(Manually copying builds over to folders can be automated too, by the way. Take a look at one of my previous posts: https://www.leeboonstra.com/developer/how-to-modify-sencha-builds/)

We will need to configure the Node.js image, because we need to copy over our own back-end JavaScript code. Therefore, create one extra Dockerfile in the server folder.
The contents will look like this:
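A minimal sketch of such a Dockerfile; the base image tag, the server.js entry file, and port 3000 are assumptions you should adapt to your own project:

```dockerfile
FROM node:4

# Create a directory for the app inside the image
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install dependencies first, so this layer is cached between builds
COPY package.json /usr/src/app/
RUN npm install

# Copy the rest of the back-end code
COPY . /usr/src/app

# The port the Node.js server listens on (assumed to be 3000)
EXPOSE 3000
CMD ["node", "server.js"]
```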

Whoops. There’s a problem with this code: the Node.js server can’t connect to my MongoDB!
This is because it’s trying to connect to the Mongo database on localhost, but our Mongo database isn’t on the local machine. You could of course hardcode the container IP in your Node.js script, or you can use environment variables, which are automatically added by Docker when it links the containers:
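Docker's (legacy) container links inject environment variables named after the link alias into the linked container; for an alias of mongodb, the host and port show up as MONGODB_PORT_27017_TCP_ADDR and MONGODB_PORT_27017_TCP_PORT. A sketch of how the Node.js server could use them, assuming a link alias of mongodb and a database named test:

```javascript
// Build the MongoDB connection string from the env variables Docker sets
// when linking a container with alias "mongodb"; fall back to localhost
// so the same code still works during local development.
function mongoUrl(env) {
    var host = env.MONGODB_PORT_27017_TCP_ADDR || 'localhost';
    var port = env.MONGODB_PORT_27017_TCP_PORT || 27017;
    return 'mongodb://' + host + ':' + port + '/test';
}

// In the Node.js server you would then connect with:
// mongoose.connect(mongoUrl(process.env));
```

Running outside Docker, the fallback gives `mongodb://localhost:27017/test`; inside a linked container, the env variables take over automatically.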