Bootstrapping Jenkins in a Kubernetes cluster

In our first post we walked through a simple way of creating a Kubernetes cluster in AWS using Tack. Now that we’ve got our cluster up and running, let’s look at how we can take one of the first steps towards our goal of a completely codified CI/CD environment by setting up and deploying Jenkins to Kubernetes to perform the CI part of our solution. The repo for our Jenkins deployment is here and the documentation for Jenkins is here.

Jenkins is one of the most widely used CI servers in the industry; however, it’s not as well suited to modern, decoupled application development and delivery as newer CI tools are. Modern microservice architectures turn Jenkins into a clicking nightmare with hundreds of jobs, UI boxes to fill in manually, and plugins to install and update. The most painful parts of this process are provisioning new services, adding values to existing ones and reconfiguring plugins, because all of these changes are made through the UI with no way to roll back.

One way of solving this problem would be to adopt another modern CI tool that allows users to keep its configuration in version control and to apply different versions of those changes to the environment. However, that would mean both learning a new tool and migrating your existing setup.

Another option is to isolate Jenkins from its configuration, migrate it into a containerised environment and make it “stateless”. This enables Jenkins to be easily scaled horizontally and provides a clean way of keeping that configuration as code.

Let’s take a look at how we can work towards solving some of these problems using Kubernetes.

Prerequisites

A Kubernetes cluster (in this example we’re using a local Minikube cluster, but you can use any cluster – local or remote)

Starting Up

We’re now going to walk through the process of creating a Jenkins deployment and provisioning it into an environment using Kubernetes. We’re not going to go into the details of Kubernetes here; this tutorial assumes you already have a basic knowledge of how Kubernetes works and the various terms and abstractions associated with it. If you’re new to Kubernetes or need a refresher, a great place to start is the official conceptual overview. We’ll start by writing a basic deployment so that we can get Jenkins into Kubernetes. Let’s first create and set a namespace for Jenkins. Run the following commands in your terminal:
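The commands below sketch that step, assuming you want to call the namespace jenkins:

```shell
# Create a dedicated namespace for Jenkins and make it the
# default namespace for the current kubectl context
kubectl create namespace jenkins
kubectl config set-context $(kubectl config current-context) --namespace=jenkins
```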

Now that we’ve got a namespace for Jenkins to live in, we need a YAML file that declaratively represents our deployment. The code fragment below should be all you need to start out. Paste it into a new document and save it as something sensible like ‘jenkins.yml’.
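A minimal deployment along these lines is a reasonable starting point (the image tag, labels and volume names here are our own choices – adapt them to your setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: master
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080   # web UI
        - containerPort: 50000  # agent connections
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
```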

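Apply the deployment and port-forward to it so you can reach the UI from your machine (on older kubectl versions you may need to port-forward to the pod directly rather than the deployment):

```shell
kubectl apply -f ./jenkins.yml
kubectl port-forward deployment/jenkins 8080:8080
```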
You can now access the Jenkins UI via http://localhost:8080. You can use CTRL-C in your terminal to exit the port-forwarding session.

Since this is a new Jenkins installation, it will want you to go through the configuration wizard to set up an admin user and plugins. However, as we’re going to provide our configuration from version control, we need to disable this first step. Open your jenkins.yml file again and follow the YAML tree down through spec -> spec -> containers -> - name: master. Underneath this section, we need to specify the following. Be careful to indent this properly using spaces (never tabs) – YAML is very fussy!
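The snippet below shows the idea – passing -Djenkins.install.runSetupWizard=false to the JVM via JAVA_OPTS, which is how the official Jenkins image disables the wizard (indentation shown relative to the container entry; adjust to match your file):

```yaml
        env:
        - name: JAVA_OPTS
          value: "-Djenkins.install.runSetupWizard=false"
```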

Now, we need to apply the new deployment and check what’s happened to Jenkins. Again run the below:

kubectl apply -f ./jenkins.yml

If you were to port-forward again to Jenkins now and access the UI, you’d see an empty system – no jobs, no builds, no plugins. We now need to get some configuration into the pod.

Checking out

As discussed earlier, a better place to store configuration is a version control system. In this example, our VCS of choice is Git with a GitHub repository, but you could use any VCS you like. To check out from our GitHub repo, we’ll need SSH keys. Let’s generate some keys (we’re not going to set a passphrase):
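Something like the following, where the secret name ssh-key-secret and the key file names are our own choices; the ssh-privatekey/ssh-publickey entries are what the init container below expects to find under /etc/secret-volume:

```shell
# Generate a passphrase-less keypair for Jenkins to use
ssh-keygen -t rsa -b 4096 -f jenkins_rsa -N "" -C "jenkins"

# Store both halves of the keypair in a Kubernetes secret
kubectl create secret generic ssh-key-secret \
  --from-file=ssh-privatekey=jenkins_rsa \
  --from-file=ssh-publickey=jenkins_rsa.pub
```

Remember to add the public key (jenkins_rsa.pub) as a deploy key on your GitHub repository so the clone can authenticate.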

Now we’ve got our secrets in the environment, but checking out the repository with lifecycle hooks won’t help, as Jenkins will already be running by then. To solve this problem, we’ll have to utilise an init container. The following init container, clone-repo, will copy the keys, create an SSH configuration and clone the repository:
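A sketch of that init container, reconstructed from the steps described below (the alpine/git image is an assumption; the volume and secret names must line up with what you declared elsewhere in the deployment):

```yaml
      initContainers:
      - name: clone-repo
        image: alpine/git
        command: ["/bin/sh", "-c"]
        args:
        - mkdir -p ~/.ssh;
          cp /etc/secret-volume/ssh-privatekey ~/.ssh/id_rsa;
          cp /etc/secret-volume/ssh-publickey ~/.ssh/id_rsa.pub;
          chmod 400 ~/.ssh/*;
          printf "host github.com\n HostName github.com\n IdentityFile ~/.ssh/id_rsa\n User jenkins" > ~/.ssh/config;
          ssh-keyscan github.com >> ~/.ssh/known_hosts;
          cd /usr/share/jenkins/ref && git clone git@github.com:ClearPointNZ/connect-jenkins-bootstrap.git
        volumeMounts:
        - name: ref-volume
          mountPath: /usr/share/jenkins/ref
        - name: secret-volume
          mountPath: /etc/secret-volume
      # Add these entries to the existing volumes list in the pod spec:
      volumes:
      - name: ref-volume
        emptyDir: {}
      - name: secret-volume
        secret:
          secretName: ssh-key-secret
```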

There’s a lot of things happening here, so let’s have a look in detail:

We first add a few volume mounts that we need. These contain Jenkins data, secrets and the ssh keys we created earlier.

Next, we copy the SSH keys to the ~/.ssh volume and set their permissions to 400 using cp /etc/secret-volume/ssh-privatekey ~/.ssh/id_rsa; cp /etc/secret-volume/ssh-publickey ~/.ssh/id_rsa.pub; chmod 400 ~/.ssh/*;

We then write an SSH client config that points at these keys and add GitHub to the known hosts: printf "host github.com\n HostName github.com\n IdentityFile ~/.ssh/id_rsa\n User jenkins" > ~/.ssh/config; ssh-keyscan github.com >> ~/.ssh/known_hosts;

Finally, we clone the Connect Jenkins repository into the ref-volume with cd /usr/share/jenkins/ref && git clone git@github.com:ClearPointNZ/connect-jenkins-bootstrap.git. You can substitute your own Git repo URL for git@github.com:ClearPointNZ/connect-jenkins-bootstrap.git if you’d like to use your own repo, but you’ll need to ensure it has the same structure as ours (or just fork ours!)

Plugging in

Now that our init container is part of our deployment, we’re in a place where we’ve got Jenkins running and the repository with some configuration checked out, but Jenkins can’t really make any use of it yet. Let’s add another init container that’s going to install some plugins to /usr/share/jenkins/ref/plugins. Jenkins will then pick these up when our master container starts. Add the following to your jenkins.yml and run kubectl apply -f ./jenkins.yml again.
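A sketch of such an init container, using the install-plugins.sh helper that ships with the official Jenkins image (the path to the plugins file inside the cloned repo is an assumption based on our repo layout):

```yaml
      - name: install-plugins
        image: jenkins/jenkins:lts
        command: ["/bin/sh", "-c"]
        args:
        # install-plugins.sh downloads each named plugin into
        # /usr/share/jenkins/ref/plugins by default
        - /usr/local/bin/install-plugins.sh $(cat /usr/share/jenkins/ref/connect-jenkins-bootstrap/plugins)
        volumeMounts:
        - name: ref-volume
          mountPath: /usr/share/jenkins/ref
```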

This will install any plugins that you specify in the plugins file. We’ve provided you with an example which will install the Kubernetes plugin for Jenkins, but if you want to use your own, the file should have one plugin per line in the following format:

plugin:version

Overriding Jenkins’ Default Configuration

Now that we’ve got the Kubernetes plugin included as part of our deployment, we’ll need to configure it. To do this, we’ll need to override the default Jenkins configuration. The documentation for the Jenkins Docker image tells us that copying the file config.xml.override to /usr/share/jenkins/ref/ will suffice. We’ll also need to replace a couple of variables in the file, as Jenkins doesn’t populate them from environment variables. This is where things might get a bit tricky, as our Kubernetes master is in the default namespace. The solution is to provide a configmap with the external URL of the Kubernetes master for our current context. Run the following to create the configmap:
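For example (the configmap name jenkins-config and key master.url are our own choices; the jsonpath expression pulls the API server URL for the current context):

```shell
kubectl create configmap jenkins-config \
  --from-literal=master.url=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
```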

We’re also going to set up authentication for Jenkins, as it would be a really bad idea to leave it open to the world! The below will also execute a security.groovy script when Jenkins starts up. This script will set up the admin user and save the API token and password needed to connect to Jenkins to the home directory. To get the password, we’ll need to grep the Jenkins logs for it: run kubectl logs deployment/jenkins | grep password and the admin password will be printed to stdout.

Now we can use MY_POD_IP and MASTER_URL as environment variables. Again, make sure your jenkins.yml looks like the below and run kubectl apply -f ./jenkins.yml.
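The relevant additions to the master container’s env section look something like this – MY_POD_IP comes from the Kubernetes downward API, while MASTER_URL reads from the configmap (the configmap name and key here are our own choices and must match whatever you created):

```yaml
        env:
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MASTER_URL
          valueFrom:
            configMapKeyRef:
              name: jenkins-config
              key: master.url
```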

Copying over

So, we’ve finally got Jenkins to a place where it can be used for something, but it doesn’t have a job to do. In our repository we have an example job that does nothing. To get this job into a state where it could be deployed from a VCS, we configured it in the Jenkins UI and then copied the ${JENKINS_HOME}/jobs folder. To get this job into our Jenkins deployment, we’re going to need another init container. Don’t forget to apply the deployment again to make these changes.
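A sketch of that init container – it simply copies the jobs folder out of the cloned repository into the Jenkins home volume (the repo path is an assumption based on our repo layout):

```yaml
      - name: copy-jobs
        image: busybox
        command: ["/bin/sh", "-c"]
        args:
        - cp -r /usr/share/jenkins/ref/connect-jenkins-bootstrap/jobs /var/jenkins_home/
        volumeMounts:
        - name: ref-volume
          mountPath: /usr/share/jenkins/ref
        - name: jenkins-home
          mountPath: /var/jenkins_home
```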

Local vs Remote Cluster

If you’re using the Minikube setup and have followed through this tutorial on your local machine, you’ll probably be OK with port-forwarding to your Jenkins instance to have a tinker. But if you’re deploying into a cloud provider, you’ll want to be able to access your instance using a sensible URL. So let’s set that up now.

We’ve provided a sample service.yml file that will expose Jenkins as a Kubernetes service of type LoadBalancer. Since Minikube doesn’t support load balancers, this will only work in a remote cluster. Once you’re ready to expose your service, run kubectl apply -f service.yml and this will create a service exposing Jenkins on its standard port 8080.
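A service along these lines will do the job (the selector must match the labels on your deployment’s pod template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  type: LoadBalancer
  selector:
    app: jenkins
  ports:
  - port: 8080
    targetPort: 8080
```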

Hopefully you’ve stuck around till this point – we’re all done! If you want to check after all this copy/pasting that your YAML deployment matches ours, you can download our deployment file from the connect-jenkins-bootstrap repo.

Using Jenkins with an RBAC-enabled Cluster

If your Kubernetes cluster is enabled for Role Based Access Control (RBAC), you’ll need to create a Cluster Role Binding so that Jenkins can use the relevant resources in its namespace. The following command will create a role binding called jenkins-admin with cluster-admin permissions. You may wish to lower these permissions – make sure you read through the RBAC documentation to understand the various roles and permissions available to you in an RBAC cluster.
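Assuming Jenkins runs under the default service account in the jenkins namespace, the binding can be created like so:

```shell
kubectl create clusterrolebinding jenkins-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=jenkins:default
```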

Note that we’ve used a clusterrolebinding in the above example which gives Jenkins access to all namespaces. You may instead wish to use a role and rolebinding that restricts Jenkins to its own namespace as follows:
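A namespace-scoped alternative might look like the following – the rules here are only a sketch of what the Kubernetes plugin typically needs in order to manage agent pods, so tighten or extend them to suit:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: default
  namespace: jenkins
```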

Further Improvements

There are several further improvements that we could make to this process, including:

Jenkins Job Builder to define jobs in YAML format – watch this space for a separate post on this!

Export of existing Jenkins state into repository

Stay tuned for our next blog entry, where we’ll discuss compile-time classpath scanning. One of the key requirements for a containerised solution to be scalable is quick startup time, and in that post we’ll discuss some of the ways we architect our apps to ensure they can start up quickly.