Isolated Development Environments with Docker


It is a common frustration: an application that runs perfectly on your machine at home fails on the machine at your office, and you spend hours debugging the difference. Because most applications are built in local development environments, this problem is widespread. An isolated development environment solves it.

What Is an Isolated Development Environment?

An isolated development environment has three defining features:
1. It is independent: it does not affect other projects and is not affected by them.
2. You only have to set it up once, and it works the same way every time.
3. It is as close to production as possible, so there is no need to worry that the application's behavior will change under repeated testing.

Docker provides exactly such an isolated development environment. It lets you run your application anywhere, and you only have to set it up once, rather than repeatedly as with locally installed environments. Some may argue that virtual machines provide similar isolation, but they consume far more resources; Docker's lightweight containers are faster to create and run.

Docker has become very popular among developers. Setting up a new development environment conventionally can take a developer half a day or more; with Docker it is much faster and largely hassle-free.


Benefits of Using Docker for Isolated Development Environments

Using Docker for creating isolated development environments has many advantages.

Reproducibility

Docker containers are reproducible: local Docker images can be deleted and rebuilt at any time, so no separate backup is required. Thanks to Docker's caching mechanism, rebuilding containers does not take much time either. The following command removes all Docker containers on the local system:
$ docker rm $(docker ps --no-trunc -a -q)

File System Integration

Personal development tools only need to be installed once on the host computer, after which they can be used across multiple environments. You can edit files with your usual editor (gvim, for example), and the changes appear immediately inside the Docker container, with all source code kept in a single directory. The simplest way to share files is to expose host directories to the container with the -v option of docker run. Alternatively, a persistent Docker data volume can be used, which is easier to carry around than a -v bind mount; a lightweight image such as busybox is commonly used as the data-volume container.
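For example, both approaches can be sketched as follows (the directory paths and container names are illustrative):

```bash
# Bind-mount the host directory ~/src into the container at /src;
# edits made on the host are visible inside the container immediately
docker run -v ~/src:/src -it ubuntu /bin/bash

# Alternatively, create a data-volume container from the lightweight
# busybox image, then attach its volume to other containers
docker run -v /data --name datastore busybox true
docker run --volumes-from datastore -it ubuntu /bin/bash
```

The data-volume variant is easier to carry around because the volume travels with the named container rather than depending on a host path.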

Caching Mechanism

Reproducing an environment means installing the same software repeatedly, which is slow if everything has to be downloaded from the original source each time. Docker's caching mechanism allows software to be installed from a local cache instead of a remote server. For example, in Python, virtualenv is used to create isolated Python environments, and the task is made simpler by pip's installer cache, which stores copies of downloaded packages.
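One way to combine the two is to mount the host's pip cache into each container, so repeated installs hit the local cache instead of PyPI (a sketch; the cache path and package name are illustrative):

```bash
# Share the host's pip cache with the container; repeated
# "pip install" runs reuse the locally cached downloads
docker run -v ~/.cache/pip:/root/.cache/pip -it python \
    pip install flask
```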

Using Docker to Create an Isolated Development Environment

If you have a Dockerfile, Docker can read the instructions in it and build an image automatically. With the docker build command you can create an image easily, step by step.

For example, assume we need a Docker container that can build and test a lightweight application, such as an HTTP server written with the Vert.x framework. We assume Docker is already installed and that the operating system is Linux.

The Dockerfile contains commands to install tools such as OpenJDK, wget, and git; to download vert.x and add it to the path; and to create a build folder and make it the default working directory. With the Dockerfile in place, the Docker image can be built with the command:
$ sudo docker build -t vertxdev .
Git is already set up in the resulting image, so it can be used to fetch the source code from GitHub.
Once the source code is available, a new container can be started that builds and runs the vert.x application, here called DearEarthServer. The vertx run command both compiles and runs a vert.x application.
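Putting those steps together, the Dockerfile might look like the following sketch (the base image, vert.x version, and download URL are illustrative assumptions, not taken from the original article):

```dockerfile
FROM ubuntu:14.04

# Install the tools mentioned above
RUN apt-get update && apt-get install -y openjdk-7-jdk wget git

# Download and unpack vert.x, and put it on the PATH
# (version and URL are illustrative)
RUN wget -qO /tmp/vertx.tar.gz http://example.com/vert.x-2.1.tar.gz \
 && tar -xzf /tmp/vertx.tar.gz -C /opt
ENV PATH /opt/vert.x-2.1/bin:$PATH

# Create a build folder and make it the default directory
RUN mkdir /build
WORKDIR /build
```

After saving this file, `sudo docker build -t vertxdev .` builds the image; a container started from it can then `git clone` the DearEarthServer sources and start them with `vertx run` (the repository location is not given in the text, so it is omitted here).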

Using Docker and Fig to Create an Isolated Development Environment

Using Docker alone to build, boot, and manage several containers can be complex. Fig, used together with Docker, creates isolated development environments faster and more easily. Fig runs multiple Docker containers and keeps the entire configuration in a single file, fig.yml. This file describes how to build the containers, run them, forward their ports, share their volumes, and link them together.

Fig Commands

To manage the Docker container setup, the following Fig commands are used:

up: builds containers as needed and starts them, reusing any that already exist

rm: removes containers

ps: lists the containers and provides easy access to their logs

Let’s look at an example to see how Fig can be used, along with Docker, to set up an isolated development environment.

Consider two databases, PostgreSQL 9.1 and Elasticsearch 1.1. Redis 2.8.3 will be used for caching, and a Python-powered Flask app will sit in front of the databases. Each of these runs in its own container. A single fig.yml links the containers and records which ones depend on which others: the database and Redis images are pulled from the Docker Hub, the Python app is built locally, and the containers start in the right order.
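The fig.yml for this setup might look like the following sketch (the image tags, port numbers, and the assumption that the Flask app builds from the current directory are illustrative):

```yaml
web:
  build: .              # build the Flask app from the local Dockerfile
  ports:
    - "5000:5000"       # expose the app on the host
  volumes:
    - .:/code           # share the source tree with the container
  links:                # dependencies: databases and cache start first
    - db
    - es
    - redis
db:
  image: postgres:9.1
es:
  image: elasticsearch:1.1
redis:
  image: redis:2.8.3
```

With this file in place, `fig up` pulls the images, builds the web app, and starts all four containers in dependency order.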

Service Options

Every service has a number of options, including:

Volumes: shares folders between the Docker container and the host computer, so a change on one side appears immediately on the other.

Ports: exposes ports between the two entities, that is, the container and the host.

Environment: sets environment variables, for example to configure database names.

Links: links containers to one another, so one service can reach another by name.

Some Other Useful Fig Commands

To stop the running services, either press Ctrl+C or run fig stop in a different window. To remove the containers belonging to services, run fig rm [SERVICE...].

If you run fig up again, it will restart the previous containers, picking up any changes you have made to your code. However, if you edit the Dockerfile of an already-built service, fig up alone is not enough; use fig build [SERVICE] to rebuild the image.
You can also run one-off commands with fig run [SERVICE] [COMMAND], for example fig run web python.

Using Docker and Vagrant to Create an Isolated Development Environment

Vagrant, an open-source tool, provides a way to build development environments that can be reproduced again and again across various operating systems; VirtualBox is its default provider. Vagrant only needs to be installed once, runs on any major operating system, and can run a number of containers simultaneously and link them together.

Docker can be used together with Vagrant to set up an isolated development environment. First, install VirtualBox and Vagrant on your computer. As before, assume we want a container for the DearEarthServer application, fetched with git clone and started with vertx run.
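With Vagrant's Docker provider, the Vagrantfile for this container might be sketched as follows (the vertxdev image is carried over from the earlier example; the command and port mapping are illustrative):

```ruby
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "vertxdev"                         # image built earlier
    d.cmd   = ["vertx", "run", "server.js"]      # illustrative entry command
    d.ports = ["8080:8080"]                      # illustrative port mapping
  end
end
```

Running `vagrant up` then starts the container through whatever Docker host Vagrant is configured to use.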

How to Customise the Docker Host?

One of the major advantages of Vagrant is that it can specify a custom Docker host, so we are not bound to boot2docker and its Tiny Core Linux base.

A separate Vagrantfile is used to describe the Docker host VM.

Save it under a name such as DockerHostVagrantfile alongside the main Vagrantfile. Next, the Docker containers must be run on the custom Docker host: the main Vagrantfile references this file so that the VM it defines is used as the Docker host.
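A minimal sketch of the two files (the box name, file names, and image are illustrative assumptions):

```ruby
# DockerHostVagrantfile -- defines the VM that acts as the Docker host
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"   # illustrative box
  config.vm.provision "docker"        # installs Docker on the VM
end

# Vagrantfile -- run the container on the custom host defined above
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "vertxdev"
    d.vagrant_vagrantfile = "./DockerHostVagrantfile"
  end
end
```

The vagrant_vagrantfile option is what tells Vagrant to boot and use that VM as the Docker host instead of boot2docker.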

Using Vagrant to Group Docker Containers

A number of Docker containers can be run simultaneously using Vagrant's multi-machine environment. Consider two Docker containers defined in the Vagrantfile: ‘vertxreceiver’ and ‘vertxsender’.
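A multi-machine Vagrantfile for the two containers might be sketched as follows (the image is carried over from earlier; the verticle names are hypothetical):

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "vertxsender" do |s|
    s.vm.provider "docker" do |d|
      d.image = "vertxdev"
      d.cmd   = ["vertx", "run", "Sender.java"]    # hypothetical verticle
    end
  end

  config.vm.define "vertxreceiver" do |r|
    r.vm.provider "docker" do |d|
      d.image = "vertxdev"
      d.cmd   = ["vertx", "run", "Receiver.java"]  # hypothetical verticle
    end
  end
end
```

`vagrant up` then starts both containers; `vagrant up vertxsender` starts just one of them.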


About The Author

Vinod M is a Big Data expert and writer at Mindmajix who contributes in-depth articles on various Big Data technologies. He also has experience writing about Docker, Hadoop, Microservices, Commvault, and a few BI tools. You can get in touch with him via LinkedIn and Twitter.
