Dynamic Environments with Kubernetes

In this series of blog posts, I will highlight some strategies and tips for adopting Kubernetes. The goal is to provide practical examples based on the experiences of other companies that have already gone down this road.

—

When adopting a new technology, such as Kubernetes, we often plug it in, use the basic features, and continue our development process as usual. However, in many cases we can leverage the features of these new technologies to solve our old problems in better, more efficient ways. Deploying Kubernetes to run and manage our applications is a good start, but we can go further, looking for ways to improve our whole development cycle.

The problem I will focus on here is managing multiple environments. Most organizations have a variety of environments, such as production, staging, testing, and development. These generally come either with strict access and security controls over who can deploy what where, or, at the other end of the spectrum, they are wide open, with all users given free rein. I have worked in both types of organizations and neither is ideal. In the former case, the rigidity and controls in place cost developers many hours submitting requests to a Configuration Management or Deployment team. In the latter case, it becomes a mystery which versions of which services are running in each environment.

There are several challenges around creating and maintaining these environments. The first is that we want them to mimic production as closely as possible, so that as we develop and test new features we can be confident things will behave the same way once we go live. The longer these environments hang around, the more likely they are to diverge from our production setup. On top of this, maintaining several environments at one-to-one parity with production can be far too costly in terms of resources.

How can we leverage an orchestration platform to solve this for us? We can take the idea of immutable infrastructure and apply it one level higher, creating dynamic environments on demand. If we don’t need these environments up all the time, why not just bring them up on demand?

Kubernetes has some features which make it easy for us to do just this. The two main ideas behind this setup are, first, sharing infrastructure — not just the servers, but the Kubernetes cluster itself — and second, creating and then deleting environments on the fly.

The main feature we can use to support this is namespaces. The documentation states: “Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.” Because they are virtual clusters, namespaces are very quick to create and also to clean up. Deleting a Kubernetes namespace will also delete all the resources within it.
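As a quick illustration (assuming kubectl is configured against a cluster; the namespace and file names here are hypothetical), creating an isolated environment and tearing it down again is one command each:

```shell
# Create a throwaway virtual cluster for a feature branch
kubectl create namespace sock-shop-feature-42

# Deploy into it like into any other cluster
kubectl apply -f deployment.yaml --namespace sock-shop-feature-42

# Deleting the namespace removes every resource created inside it
kubectl delete namespace sock-shop-feature-42
```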

Let’s see how we can incorporate this idea into an existing Continuous Integration Pipeline:

For the sake of a demo we will use the Sock Shop (https://microservices-demo.github.io) reference application. We’ll create a CI pipeline to build our own service, and then deploy it along with the Sock Shop application into a dynamically created namespace in our Kubernetes cluster. We’ll verify the build by running some integration tests, and when successful, throw away the entire environment (namespace).

You can find the service and build and deploy scripts in the following repo: https://gitlab.com/iandcrosby/continous-socks

The pipeline is defined in the .gitlab-ci.yml file, which declares some variables and the stages of our pipeline:

```yaml
variables:
  IMAGE_NAME: "${CI_REGISTRY}/iandcrosby/${CI_PROJECT_NAME}"
  NAMESPACE: "${CI_PROJECT_NAME}-${CI_BUILD_REF_NAME}-${CI_BUILD_REF}"

stages:
  - build
  - deploy
  - test
  - cleanup
```

The first stage is the build, where we build our Docker image from the latest commit, tag it with the build info, and push it to our registry:

```yaml
build:
  image: docker:latest
  services:
    - docker:dind
  stage: build
  script:
    - echo "$KUBE_URL"
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t "$IMAGE_NAME:${CI_BUILD_REF_NAME}-${CI_BUILD_REF}" .
    - docker push "$IMAGE_NAME:${CI_BUILD_REF_NAME}-${CI_BUILD_REF}"
```

(Note: The KUBE_* variables are made available via the GitLab Kubernetes integration.)

The deploy stage creates a new namespace named after the project and the build (this guarantees each namespace is unique). We then generate a deployment config for our newly built image from a template and deploy it to the new namespace, along with any dependencies we need for running our integration tests — in this case, a subset of the Sock Shop.
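A sketch of what such a deploy stage might look like — the kubectl image, the template file name, and the sed-based substitution are assumptions for illustration, not taken from the repo:

```yaml
deploy:
  stage: deploy
  image: lachlanevenson/k8s-kubectl:latest   # assumed image providing kubectl
  script:
    # Create the per-build virtual cluster
    - kubectl create namespace "$NAMESPACE"
    # Render the deployment template with the freshly built image tag
    - sed "s|__IMAGE__|$IMAGE_NAME:${CI_BUILD_REF_NAME}-${CI_BUILD_REF}|" deploy/deployment.tpl.yaml > deployment.yaml
    - kubectl apply -f deployment.yaml --namespace "$NAMESPACE"
    # Deploy the Sock Shop dependencies into the same namespace
    - kubectl apply -f deploy/sock-shop/ --namespace "$NAMESPACE"
```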

With such a solution, we remove the need for a classic ‘integration environment’, which is not only a waste of resources (kept up and available 24/7) but also tends to diverge further from the source of truth (production) the longer it lives. Since our short-lived environments are created on demand, from the same sources we use to create our production setup, we can be confident we are running a near-production system.
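To complete the cycle, a test stage can run the integration tests against the services in the fresh namespace, and a final cleanup stage tears everything down by deleting the namespace. A minimal sketch, assuming a hypothetical run-integration-tests.sh script and an image providing kubectl:

```yaml
test:
  stage: test
  image: lachlanevenson/k8s-kubectl:latest   # assumed image
  script:
    # Run the integration tests against the services in the new namespace
    - ./run-integration-tests.sh "$NAMESPACE"

cleanup:
  stage: cleanup
  image: lachlanevenson/k8s-kubectl:latest   # assumed image
  when: always   # tear the environment down even if earlier stages fail
  script:
    # Deleting the namespace deletes every resource created for this build
    - kubectl delete namespace "$NAMESPACE"
```

Marking the cleanup stage `when: always` ensures the namespace is removed even when the tests fail, so broken builds don’t leave orphaned environments consuming cluster resources.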

*In order to properly benefit in terms of cost savings, you will need autoscaling set up on your cluster. Since we usually pay per instance, the cluster needs to add and remove machines as demand changes.*

The above example is only a demo meant to show how this functionality can be used, but I have worked with several organizations that have implemented similar setups. This is just the first step; the questions that usually come next concern access control and security. How can we limit access to certain environments? How can we ensure that memory-hungry applications in one environment do not impact the rest? In the following blog post I will take the above example and address these concerns by leveraging RBAC, Network Policies and Limits.
