
Tuesday, 7 April 2015

Running Keycloak cluster with Docker

This is the first of two articles that will describe how to run Keycloak in clustered mode - first with Docker, and then with Kubernetes running on OpenShift 3.

The preferred way to run a Keycloak server - an authentication and authorization server with support for single sign-on - is to run it as an isolated application in its own process. What you specifically don’t want is to run any other application in the same JVM instance, and it’s also not the best idea to run any other publicly facing applications on the same server.

The reason is of course security, but also stability. You don’t want your Keycloak process to suffer security vulnerabilities or ‘Out of memory’ errors because of another application deployed in the same JVM. It is one thing to lose one application; it is another, more serious thing to lose login capability for many applications and services, or even have your Keycloak private keys compromised.

Even an isolated instance, though, can occasionally experience a failure. Therefore, the proper way is to have a cluster of instances with a load-balancing router or reverse proxy in front that detects a failed instance and diverts traffic away from it. One way to set that up would be to have one production instance at a time, and another failover instance idling until it’s needed. Another, even better way is to use all the running instances as production instances - this gives you horizontal scaling, whereby bringing up more instances linearly increases the number of requests your cluster is capable of handling.

It is this horizontal scaling capability that is the goal of the Kubernetes project - an open source solution for provisioning Docker containers.

In this first article I’ll show how to set up two Keycloak instances in clustered mode, each running in its own Docker container, and using a PostgreSQL database running in another Docker container.

In the next article we’ll enhance that setup by configuring these Docker instances as Kubernetes Pods using OpenShift 3. That will give us a scalable runtime environment where we can remove and add server instances virtually unnoticed by clients.

Buckle up, and let’s get started.

Installing Docker

The first thing we’ll need is Docker. Docker is a containerization technology - as opposed to virtualization - and is Linux-specific. Multiple processes can each run in their own isolated chrooted environment, with their own filesystem image and their own IP address within the same bridged network, while all of them use the host’s Linux kernel - the same kernel process.

Since we can natively use Docker only on Linux, what we do when we’re on Windows or OS X is use a solution that runs a simple, small, headless (no desktop) Linux distribution on VirtualBox - it’s called boot2docker.

If you already use VirtualBox and have an existing virtual Linux instance, you can use that one instead. If it includes a desktop, that can even simplify things, as you can reach the Docker containers’ IP addresses directly from the browser. You may want to add another network adapter to your virtual instance - by default there is 'Adapter 1' using NAT, and you should configure 'Adapter 2' to be of type Host-only Adapter. That will simplify connecting from your Windows / OS X host to your Linux guest.

You can find Docker installation instructions for your platform on the docker.io site.

Starting Docker daemon

Once you have Docker installed, make sure that your Docker daemon process is running. In your Linux shell you can execute:
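For example, on a SysV-init distribution (typical at the time of writing) the check would look like this - systemd-based distributions would use systemctl instead:

```shell
# Check whether the Docker daemon is already running
# (on systemd distributions: systemctl status docker)
service docker status
```

A running daemon typically reports something along the lines of 'docker ... is running'.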

If the status indicates that your Docker daemon is not yet running, it’s time to start it - you may have to prepend ‘sudo ’ if you’re not root:

service docker start

Using Docker client

We can now use the Docker client to issue commands to the daemon. We first have to make sure that our shell environment has some environment variables set to allow the Docker client to communicate with the daemon.

One way to provide the proper environment is to execute the Docker client through sudo or as the root user (su -) on the Docker host system.

Another is to specify an environment variable:

export DOCKER_HOST=tcp://192.168.56.101:2375

Here the IP address is one of the public interfaces on the Docker host system - one that can also be reached from your client terminal (which can be on another host). You can use ifconfig or ip addr to list the available interfaces and their IPs. Note that Docker configures virtual networks that are not directly reachable from another host; we are not interested in those here.

We can now list currently running Docker containers:

docker ps

If this is the first time you’re using Docker, or if you have just started up the daemon, then no Docker container is running yet.

Starting Postgres as Docker container

We’re now going to set up a PostgreSQL database.

Docker uses a central repository of Docker images - each image represents a filesystem with startup configuration for an application, and is thus a mechanism for packaging an application.

We’ll use the latest official PostgreSQL image to start PostgreSQL as a new container instance. You can learn more about it on DockerHub.
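Based on the description that follows, the command would look something like this - POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD are the variables the official postgres image understands for creating a database and user on first startup:

```shell
# Start PostgreSQL in the background (-d), name the container 'postgres',
# and have the image create a 'keycloak' database and user on first startup
docker run -d --name postgres \
    -e POSTGRES_DB=keycloak \
    -e POSTGRES_USER=keycloak \
    -e POSTGRES_PASSWORD=password \
    postgres
```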

That command instructs the Docker daemon to download the latest official postgres image from DockerHub and start it up as a new Docker container. The -d switch instructs the docker client to return immediately, while any processes executed in the container keep running in the background.

Additionally, we specified several environment variables to be passed to the container, which are used to configure a new database and a new user for accessing the database. Note that we used 'password' - you should really change it to something else!

By using --name postgres we assigned a name to the new container. We’ll use this name whenever we need to refer to this running container in subsequent invocations of docker client.

Note: if this is not the first time you’re working through these steps you may already have a container named 'postgres'. In that case, you won’t be able to create another one with the same name. You have two options - choose a different name for this one, or destroy the existing one using: docker rm postgres

We can attach to the container output using:

docker logs -f postgres

We use -f to keep following the output - analogous to how tail -f works.

We can now check that the database accepts connections, since it will be accessed via TCP from other Docker containers.

We can start a new shell process within the same Docker container:

docker exec -ti postgres bash

With this command we don’t start a new container - that would create a whole new copy of the chrooted file system environment with a new IP address assigned. Rather, we execute another process within the existing container.

By using -ti we tell docker that we want to allocate a new pseudo-tty, and that we want this terminal’s input attached to the container. That will allow us to interactively use the container’s bash.

We can find out what the container’s IP address is:

ip addr

We should see two interfaces:

lo with address 127.0.0.1

eth0 with address 172.17.0.x

eth0 will have an IP address within the 172.17.x.x network.

This IP address is visible from all other docker containers running on the same host.

Let’s make sure that we are in fact attached to the same container running the PostgreSQL server. Let’s use the psql client to connect as user keycloak to the local db:
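Inside the container’s bash, a connection over the local socket would look like this (the user and database names are the ones configured earlier):

```shell
# Connect to the local PostgreSQL server as user keycloak
psql -U keycloak
```

At the keycloak=# prompt you can then run \l to list the databases and confirm that the keycloak database exists.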

We’re in fact inside the correct container, and have confirmed that the database is correctly configured with user keycloak.

Exit the psql client with \q, and then exit the shell with exit.

Another way to find out the container’s address is using docker’s inspect command:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' postgres

That should return the same IP address as we saw assigned to eth0 inside ‘postgres’ container.

Testing remote connectivity

We can test that remote connectivity works by starting a new Docker container based on the same postgres image, so that we have access to the psql tool:

docker run --rm -ti --link postgres:postgres postgres bash

Because we use run, the last postgres argument is not a reference to an existing running container, but the ID of a Docker image to use for a new container. The extra bash argument instructs docker to skip executing the default startup script (the one that starts up a local postgres server) and to execute the command that we specified - bash.

The --rm argument instructs docker to completely clean up the container instance once the command exits - i.e. once we type exit in the bash.

We have also specified --link postgres:postgres, which instructs Docker to add the IP address of the existing ‘postgres’ container to the ‘/etc/hosts’ file, mapped to the hostname postgres. We can thus use postgres as a hostname, instead of having to look up its IP address.

Run the following:

# psql -U keycloak -h postgres
Password for user keycloak:
psql (9.4.1)
Type "help" for help.

keycloak=#

We have successfully connected over the network to the PostgreSQL server running in another container.

It is now time to set up Keycloak.

Starting new Keycloak cluster as Docker container

We’ll use a prepared Docker image from DockerHub to run two Keycloak containers, each connecting to the PostgreSQL container we just started. In addition, the two Keycloak containers will establish a cluster for a distributed cache so that any state in between requests is instantly available to both instances. That way any one instance can be stopped, and users redirected to the other, without any loss of runtime data.

Issue the following command to start the first Keycloak container - make sure that the environment variables are the same as those passed to the postgres container previously:
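Given the description that follows (image jboss/keycloak-ha-postgres, container name keycloak, host port 8080, a link to the postgres container, and the same credentials as before), the command would look roughly like this - the exact POSTGRES_* variable names the image expects are an assumption here, so check its DockerHub page:

```shell
# First Keycloak instance: link to the 'postgres' container and
# map host port 8080 to the container's port 8080
docker run -d --name keycloak \
    --link postgres:postgres \
    -p 8080:8080 \
    -e POSTGRES_DATABASE=keycloak \
    -e POSTGRES_USER=keycloak \
    -e POSTGRES_PASSWORD=password \
    jboss/keycloak-ha-postgres
```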

Docker will download the jboss/keycloak-ha-postgres image from DockerHub, and then create a new container instance from it, allocating a new IP address in the process. We used -p to map port 8080 of the Docker host to port 8080 of the new container, so that we don’t need to know the container’s IP in order to connect to it. We can simply connect to the host’s port.

Monitor Keycloak as it’s coming up:

docker logs -f keycloak

Let’s now start another container, and let’s name it keycloak2 - this one will get another IP address:
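Assuming the same image and link as for the first instance, the second container would be started like this, mapped to host port 8081 instead:

```shell
# Second Keycloak instance on host port 8081
docker run -d --name keycloak2 \
    --link postgres:postgres \
    -p 8081:8080 \
    -e POSTGRES_DATABASE=keycloak \
    -e POSTGRES_USER=keycloak \
    -e POSTGRES_PASSWORD=password \
    jboss/keycloak-ha-postgres
```

You can follow its startup with docker logs -f keycloak2.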

In the log output we can see that a JGroups cluster was formed over the two nodes. We can also find this container’s IP address in the log - it’s 172.17.0.10 in this case.

Each Keycloak instance can now be accessed from the Docker host (where the Docker daemon is running) via port 8080 of its container’s IP address. And since we mapped ports 8080 and 8081 of the Docker host to the Keycloak containers, we can also connect directly to these ports on the Docker host.

As an alternative we could forego mapping container ports to the Docker host’s ports, and instead set up routing / forwarding using the Docker host’s iptables - to let the traffic through the firewall - and set up routes on the client hosts connecting to those instances, directing any traffic bound for 172.17.0.0/16 through the Docker host.