Loose Coupling and Discovery of Services With Consul — Part 2
Fri, 05 Sep 2014

Creating a Consul Client Docker Image

In my previous post, I demonstrated how to create a cluster of Consul servers using a pre-built Docker image. I was able to do this because our use case was simple: run Consul.

In this post, however, we will be creating one or more Consul clients that will register services they provide, which can then be queried using Consul’s DNS and / or HTTP interfaces. As we are now interested in running Consul and an application providing a service, things are going to get a bit more complicated.

Before proceeding, I’m going to need to explain a little bit about how Docker works. Docker images, such as the progrium/consul image we used in the previous post, are built using instructions from a special file called a Dockerfile. There are two related instructions that can be specified in this file which control the container’s running environment: that is, the process or shell that is run in the container. They are ENTRYPOINT and CMD.

There can be only one ENTRYPOINT instruction in a Dockerfile, and it has two forms: either an array of strings, which will be treated like an exec, or a simple string which will execute in ‘/bin/sh -c’. When you specify an ENTRYPOINT, the whole container runs as if it were just that executable.

The CMD instruction is a bit different. It too can only be specified once, but it has three forms: the first two are the same as ENTRYPOINT, but the third form is an array of strings which will be passed as parameters to the ENTRYPOINT instruction. It’s important to note that parameters specified in an ENTRYPOINT instruction cannot be overridden, but ones in CMD can. Therefore, the main purpose of CMD is to provide defaults for an executing container.
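To illustrate the interaction, here is a contrived example (not from the image discussed in this post). Given a Dockerfile containing:

```dockerfile
# ENTRYPOINT fixes the executable; CMD supplies overridable default arguments
ENTRYPOINT ["/bin/echo", "Hello"]
CMD ["world"]
```

running the container with no arguments prints ‘Hello world’, while running it with an extra argument (e.g. ‘docker run <image> there’) overrides CMD and prints ‘Hello there’.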

It’s probably becoming clear to you by now that Docker images are designed to run one process or shell. We want to run two processes, however: the Consul agent and an application. Thankfully, there is an image available called phusion/baseimage that provides runit for service supervision and management, which will make it easy for me to launch Consul and another service (such as nginx) within my containers.

SIDEBAR: There is quite a bit of debate about the intended / recommended use of Docker, and whether the process run in the container should be your application or an init process that will spawn, manage and reap children. If you’re interested in reading more about the pros and cons of each of these approaches, please refer to Jérôme Petazzoni’s post, phusion’s baseimage-docker page, and / or Google the topics of ‘separation of concerns’ and ‘microservices’.

Now that I’ve provided some background, let’s get into the specifics of the Docker image for my Consul clients. I’ll begin with the full contents of the Dockerfile and then describe each section in detail.
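The Dockerfile itself did not survive the feed extraction. Reconstructed from the walkthrough below, it would look roughly like this; the file names supplied to ADD, the volume paths, the Consul version and download URL, and the exact SSHD removal commands are all assumptions:

```dockerfile
# Build on phusion/baseimage so runit can supervise both nginx and Consul
FROM phusion/baseimage
MAINTAINER bfraser

# Remove the SSHD service provided by the base image
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# Install nginx, and zip for unpacking the Consul release
RUN apt-get update && apt-get install -y nginx zip

# Mount points available to 'docker run -v' (unused in this demonstration)
VOLUME [ "/data", "/var/log" ]

# Tell nginx not to daemonize, and add a runit service for it
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
ADD nginx.sh /etc/service/nginx/run

# Download Consul and extract the binary into /usr/bin
RUN wget -qO /tmp/consul.zip https://dl.bintray.com/mitchellh/consul/0.3.2_linux_amd64.zip && \
    unzip /tmp/consul.zip -d /usr/bin && rm /tmp/consul.zip

# Register the nginx service, and add a runit service for the Consul agent
ADD consul.json /etc/consul.d/nginx.json
ADD consul.sh /etc/service/consul/run

# Expose the nginx port plus Consul's server RPC, Serf, CLI RPC, HTTP and DNS ports
EXPOSE 80 8300 8301 8301/udp 8302 8302/udp 8400 8500 8600 8600/udp

# Set HOME and use phusion's init process as the entry point
ENV HOME /root
ENTRYPOINT [ "/sbin/my_init" ]
```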

The first section specifies that my image will be based on that of phusion/baseimage, and that I am the maintainer of my image. So far so good.

Next, I am removing the SSHD service from the container. This is part of phusion’s image and is not something I am interested in using for the purposes of my demonstration.

The next step is to install nginx and should look fairly straightforward. I have taken the liberty of installing zip at the same time, as I will be using it later on to install Consul.

The VOLUME instruction lets me define mount points that can be used for mounting volumes in the container, passed as arguments of the docker run command. I am not actually using this in my demonstration; it is just there to make you aware of the capability.

Next I am telling nginx not to daemonize itself, and am adding an nginx configuration for runit. The ADD instruction adds a local file to the image in the specified path. The runit configuration I am adding is pretty simple and looks like this:

#!/bin/sh
exec /usr/sbin/nginx -c /etc/nginx/nginx.conf 2>&1

Now that I am done with nginx, I want to install and configure Consul. I simply retrieve the binary package and extract it into /usr/bin in the image. I then use another ADD instruction to supply a configuration file for Consul. This file is in JSON format and tells Consul to register a service named ‘nginx’.

{
  "service": {
    "name": "nginx",
    "port": 80
  }
}

I then use an ADD instruction to supply a runit configuration for Consul in the same manner I did for nginx. Its content is as follows:
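That content was lost in the feed extraction. Based on the nginx runit script above and the CONSUL_FLAGS environment variable introduced later in this post, a plausible sketch is the following (the configuration directory and log path are assumptions):

```shell
#!/bin/sh
# Run the Consul agent under runit, picking up the service definition in
# /etc/consul.d plus any extra flags passed in via CONSUL_FLAGS
exec /usr/bin/consul agent -config-dir=/etc/consul.d ${CONSUL_FLAGS} >>/var/log/consul.log 2>&1
```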

With all of the hard stuff out of the way, I now define the nginx and Consul ports to EXPOSE to other containers running on the host, and to the host itself.

And last but not least, I set the HOME environment variable to /root and set the init process of /sbin/my_init as the ENTRYPOINT of my container.

This creates a good foundation. If we were to run the image as is, we would end up with nginx running and listening on port 80, and Consul would be running as well. However, we haven’t provided Consul with any details of the cluster to join. As you have probably guessed, that’s what CONSUL_FLAGS is for, and we’ll see it in action in the next section.

Creating Consul Clients With Vagrant

So far we’ve gone to the trouble of creating a Docker image that will run Consul and nginx, and we’ve supplied configuration to Consul that will have it register nginx as a service. Now we’ll want to create some clients with Vagrant and see querying of services in action.

Let’s start by modifying our Vagrantfile. Just as was done with the Consul servers, we’ll want to create an array for the nginx members and tell Vagrant to use the Docker provider. This time, however, some additional configuration will be necessary. The full Vagrantfile is now going to look like this:
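The Vagrantfile itself was lost in the feed extraction. A sketch consistent with the settings discussed below might look like this; the member names match the nginx1 container attached to later, but the exact contents of CONSUL_FLAGS are an assumption:

```ruby
JOIN_IP = ENV['JOIN_IP']

Vagrant.configure("2") do |config|
  %w(nginx1 nginx2 nginx3).each do |member|
    config.vm.define member do |consul_client|
      consul_client.vm.provider "docker" do |docker|
        docker.name        = member
        docker.build_args  = [ "-t", "bfraser/consul-nginx", "--rm=true" ]
        docker.build_dir   = "."
        docker.cmd         = [ "/bin/bash" ]
        docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]
        docker.env         = { "CONSUL_FLAGS" => "-node=#{member} -join=#{JOIN_IP}" }
      end
    end
  end
end
```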

Note that this time we are not using docker.image to supply the name of an existing Docker image to use for our containers. Instead, we are going to use docker.build_args and docker.build_dir to build our own.

docker.build_args = [ "-t", "bfraser/consul-nginx", "--rm=true" ]

This is a list of extra arguments to pass to the docker build command. Specifically, I am naming the image bfraser/consul-nginx and telling Docker to remove intermediate containers after a successful build.

docker.build_dir = "."

This should be fairly self-explanatory: I am simply telling Docker to use the current working directory as the build directory. However, I have some files (including the Vagrantfile) that I do not want to be part of the resulting image, so it is necessary to tell Docker to ignore them. This is accomplished with a file called .dockerignore and mine looks like this:

.git
.vagrant
Vagrantfile

Next, I am using docker.cmd to pass /bin/bash as an extra parameter to the image’s ENTRYPOINT, which allows me to have a shell in the container. A little later, I will show you how this can be useful.

The next line:

docker.create_args = [ "--dns=#{JOIN_IP}", "-t", "-i" ]

is a list of extra arguments to pass to the ‘docker run‘ command. Specifically, I am providing a custom DNS server and instructing Docker to allocate a TTY and keep STDIN open even if not attached to the container.

Lastly, I am supplying a hash to docker.env which will expose an environment variable named CONSUL_FLAGS to the container. The environment variable contains additional parameters to be used when starting Consul.

With this configuration in place, we can now use Vagrant to create three additional containers, this time running Consul and nginx.

$ JOIN_IP=172.17.42.1 vagrant up --provider=docker

This time if we check the output of ‘consul members‘ we should see our host and six containers: three Consul servers and three nginx servers functioning as Consul clients.

Querying Services

As I mentioned in ‘Where Does Consul Fit In?’ in my original post, Consul is a tool for enabling discovery of services, and it provides two interfaces for doing so: DNS and HTTP. In this section, I’ll show you how we can use each of these interfaces to query for details of services being provided.

First, let’s use the HTTP interface to query which services are being provided by members of the Consul cluster.
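The curl commands did not survive the extraction; against the Consul server running on the host, they would look something like this. The node name shown is illustrative, and the response bodies are abbreviated, but the address 172.17.0.18 and port 80 match the demonstration:

```shell
$ curl -s http://172.17.42.1:8500/v1/catalog/services
{"consul":[],"nginx":[]}

$ curl -s http://172.17.42.1:8500/v1/catalog/service/nginx
[{"Node":"nginx1","Address":"172.17.0.18","ServiceName":"nginx","ServicePort":80}, ...]
```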

We can see here that there are three nodes providing the nginx service, and we have details of the IP address and port they are listening on. Therefore, if we were to open http://172.17.0.18 in a web browser, we would see the ‘Welcome to nginx!’ page.

Notice how the REST endpoint changed between the first and second curl requests, from /v1/catalog/services to /v1/catalog/service/nginx. Consul provides extensive documentation of the various REST endpoints available via the HTTP API.

While the HTTP API is the most powerful method of interacting with Consul, if we are only interested in querying for information about nodes and services, it is also possible to use its DNS server for simple name lookups. Querying for details of the nginx service via the DNS interface is as simple as running the following:
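The query itself was lost in the extraction. Assuming the server exposes Consul’s DNS interface on its default port of 8600 (it may instead be bound to the standard port 53, depending on how the agent was started), it would be along these lines, with the answer section illustrative:

```shell
$ dig @172.17.42.1 -p 8600 nginx.service.consul

;; ANSWER SECTION:
nginx.service.consul.  0  IN  A  172.17.0.18
```

Appending the SRV record type to the query would additionally return the port each node is listening on.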

As you can see, while it is certainly possible to develop a client to tightly integrate with Consul through its API, it is also easy to take advantage of its DNS interface and not have to write a client at all.

Attaching To A Docker Container

I have one last tip, which is especially useful if you are new to Docker: how to attach to your containers.

I mentioned earlier in this post that I was including the following line in my Vagrantfile:

docker.cmd = [ "/bin/bash" ]

What this does is pass /bin/bash as an extra parameter to the image’s ENTRYPOINT instruction, resulting in the /sbin/my_init process spawning a bash shell.

I also instructed Vagrant, via the docker.create_args line, to have Docker allocate a TTY and keep STDIN open even if not attached to the container. This means I can attach to my containers and interact with them through a bash shell as follows (note: press the ‘Enter’ key following the command to get the prompt):

$ docker attach nginx1
root@4b5a98093740:/#

Once you are done working with the container, you can detach from it by pressing ^P^Q (that’s CTRL-P followed by CTRL-Q).

Summary

With that, we have reached the end of my demonstration. Thanks for sticking with me!

First I described the importance of loose coupling and service discovery in modern service-oriented architectures, and how Consul is one tool that can be used for achieving these design goals.

Then I detailed, by way of a demonstration, how Vagrant and Docker can be used to form a Consul cluster, and how to create a custom Docker image that will run both your application and a Consul agent.

And, last but not least, I showed how you can make use of Consul’s HTTP API and DNS interface to query for information about services provided.

Hopefully you have found these posts useful and now have some ideas about how you can leverage these technologies for managing your infrastructure. I encourage you to provide feedback, and would be very interested in any tips, tricks or recommendations you may have!

Loose Coupling and Discovery of Services With Consul — Part 1
Wed, 03 Sep 2014

Today I would like to demonstrate the use of Consul to help achieve loose coupling and discovery of services, two important principles of service-oriented architecture (SOA) present in modern, elastic infrastructures.

If you have ever designed an architecture to be deployed in a cloud computing environment, then you are probably already familiar with the idea that you should design for failure. Having the expectation that one or more components of your infrastructure may fail at any given time forces you to design your services in such a way that they are loosely coupled from one another. This usually means running multiple instances of web, application, and database servers, and making use of load balancers, message queues and / or APIs for communication between them.

The environment is scaled horizontally by adding more instances to the pool. Because of this, instances need to be able to advertise the service they provide, and discover providers of other services.

Where Does Consul Fit In?

Consul’s introduction page does an excellent job of describing the features it provides and what its basic architecture looks like, but I’ll provide a brief summary here.

Consul is a tool for discovering and configuring services in your infrastructure. At its core, it provides service discovery, health checking, and a key/value store. It supports multiple data centers without having to add a layer of abstraction. Nodes that provide services run a Consul agent which talks to one or more Consul servers. Services or nodes can be discovered by querying any of the Consul servers or agents. Data is exposed through DNS and / or HTTP interfaces.
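As a quick illustration of the key/value store, values can be written and read through the HTTP API. The output below assumes an agent listening on localhost; note that returned values are Base64-encoded, and the response is abbreviated:

```shell
$ curl -X PUT -d 'bar' http://localhost:8500/v1/kv/foo
true
$ curl http://localhost:8500/v1/kv/foo
[{"Key":"foo","Flags":0,"Value":"YmFy", ...}]
```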

Its capabilities overlap with a number of different types of software including: service discovery systems such as ZooKeeper and etcd; configuration management systems such as Puppet or Chef; and monitoring systems such as Nagios or Sensu. Again, there is an excellent page on the Consul website that describes how it compares with these tools.

The Demo Environment

For my Consul demonstration, I elected to use a combination of Docker and Vagrant. Docker because it makes it easy for me to run multiple lightweight containers on the same machine, and Vagrant because it gives me a great deal of flexibility in building and controlling the containers I will be creating. If you are unfamiliar with either of these tools, allow me to provide a brief overview.

Docker

Per Wikipedia, Docker “…is an open-source project that automates the deployment of applications inside software containers”. It consists of: Docker Engine, the daemon and tooling that runs containers on the host operating system; and Docker Hub, a cloud service similar to GitHub which allows users to share images and automate workflows.

Docker makes use of the following features of the Linux kernel: cgroups (or control groups), which make it possible to limit and isolate resource usage (CPU, memory, etc) of process groups; and namespace isolation, where process groups’ view of operating environment resources such as process trees, network, and file systems are isolated from one another.

Vagrant

Vagrant is a tool for building complete, reproducible development environments, with a focus on automation. It started out as essentially a ‘wrapper’ for VirtualBox, but now supports VMware, Docker, kvm/libvirt and Amazon EC2 as providers, as well as a number of provisioners including Puppet, Chef, and Docker.

Vagrant was created by Mitchell Hashimoto, who initially maintained it in his free time. After it went on to become wildly successful, he formed HashiCorp so he could work on the product full time. HashiCorp has since released other products including Packer, Serf, Consul, and most recently Terraform.

Installing The Tools

Docker

Ubuntu 14.04 LTS includes Docker in its universe repository under the name docker.io, so it can be installed this way:

$ sudo apt-get update
$ sudo apt-get install docker.io

Keep in mind that this may not be the latest version of Docker. If you would like to try the latest version, you will need to add the Docker repository key and add the repository to apt’s sources list. A script has been made available to automate this process for you, so all that is necessary is to run the following:

$ curl -sSL https://get.docker.io/ubuntu/ | sudo sh

On Fedora, Docker is provided by the package named docker-io. If you are running Fedora 19 and have the (unrelated) docker package installed, you will need to remove it before continuing.

$ sudo yum -y remove docker

With Fedora 21 and later, the docker package’s functionality is provided by another package named wmdocker, and it does not conflict with docker-io. Use the following command to install Docker.

$ sudo yum -y install docker-io

Whichever operating system you are running Docker on, you will likely want to be able to use the commands as your regular, non-privileged user, without having to elevate privileges to root. Therefore, you will probably want to make yourself a member of the docker group so you have access to the socket file used by the various Docker commands.

$ sudo usermod -a -G docker your_name

Vagrant

For Vagrant, you will need at least version 1.6.0 as that is when the Docker provider was introduced. This demonstration was tested with version 1.6.3. To install Vagrant, visit its download page and obtain the appropriate package for your operating system. You can install the package on Ubuntu using the following command:

$ sudo dpkg -i vagrant_1.6.3_x86_64.deb

That’s it. In the next section, we will install Consul and continue with setting up our cluster.

Setting Up The Consul Cluster

Let’s begin by establishing the Consul cluster, which will be used by nodes to register themselves as providers of services, and can be queried to discover which services are being provided.

The first step is to download and install Consul on the host which will be running the Docker containers.
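The commands for this step were lost in the extraction. At the time of writing, something along these lines would work; the version number and download URL are assumptions, and the bind address matches the docker0 bridge IP used throughout this demonstration:

```shell
$ wget -O consul.zip https://dl.bintray.com/mitchellh/consul/0.3.2_linux_amd64.zip
$ sudo unzip consul.zip -d /usr/local/bin/
$ consul agent -server -bootstrap-expect 1 \
    -data-dir /tmp/consul -client 0.0.0.0 -bind 172.17.42.1 &
```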

This server will elect itself leader of the cluster (since it is currently the only member), and is what the rest of the Consul servers will connect to when joining. Ultimately, the architecture will consist of this initial server on the host plus three more Consul servers running as Docker containers.

Before we can bring up the rest of the Consul servers, however, we’ll need to do some prep work. As mentioned above, the intent is to use Vagrant with the Docker provider to create this environment. Therefore, let’s begin by creating a working directory to contain our Vagrantfile.
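The Vagrantfile was not preserved in the feed. A minimal sketch matching the description below would be the following; the docker.image and docker.cmd lines are quoted from the original, while the member names are assumptions:

```ruby
JOIN_IP = ENV['JOIN_IP']

Vagrant.configure("2") do |config|
  %w(consul1 consul2 consul3).each do |member|
    config.vm.define member do |consul_server|
      consul_server.vm.provider "docker" do |docker|
        docker.image = 'progrium/consul'
        docker.cmd   = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]
      end
    end
  end
end
```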

What should be self-evident is that Vagrant is assigning the value of an environment variable called JOIN_IP to a local variable with the same name, and then enumerating through an array of three Consul members which it will create using the Docker provider.

What might not be so obvious, however, are the docker.image and docker.cmd lines, so I will explain them in greater detail.

docker.image = 'progrium/consul'

This line tells Docker to launch (or ‘run’) an instance of the progrium/consul image as found on the Docker Hub Registry. It is a small container based on BusyBox used to run Consul. The project’s source page can be found on GitHub if you’re interested in learning more. The next line:

docker.cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]

is an array of strings which is used to build a custom command to run on the container. Vagrant will perform string interpolation on the member and JOIN_IP variables, replacing them with the current member’s name and the IP address that was provided via the JOIN_IP environment variable. The end result is that the container runs a command such as this:
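The interpolation can be seen with a few lines of plain Ruby; the member name here is illustrative:

```ruby
member = "consul1"
JOIN_IP = "172.17.42.1"

# The same array of strings as in the Vagrantfile's docker.cmd line
cmd = [ "-server", "-node=#{member}", "-join=#{JOIN_IP}" ]

# The container ends up running Consul with these arguments:
puts cmd.join(" ")
# -server -node=consul1 -join=172.17.42.1
```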

As you can see, we have successfully created a cluster of Consul servers, comprised of an initial server running on the host and three servers running as Docker containers created by Vagrant. In my next post, we will see how to add Consul clients providing services (‘providers’), register the services, and query available services from clients (‘consumers’).