
In my quest for the ultimate tool for continuous integration and continuous delivery pipelines on a Kubernetes cluster, I’ve previously looked at well-known options such as GitLab and Jenkins. These tools may have Kubernetes integrations, but they are usually anything but lightweight. If you just want to get your code from point A (git) to point B (a production Kubernetes cluster), you may be interested in a new tool named “Brigade”.

Brigade was introduced by Microsoft late last year. It’s an event-driven scripting tool for Kubernetes which aims to make CI/CD pipelines on a cluster easier. Contrary to many other tools, it tells developers to “leave your YAML at home”, instead opting for JavaScript.

In this blog, I’d like to walk you through the steps of setting up your first basic CI/CD pipeline with Brigade.

So what’s different?

Brigade is built from the ground up with Kubernetes in mind. Projects are configured using Helm, triggered using GitHub (or equivalent) webhooks and executed by running a `brigade.js` file in the repo’s root. This file contains event triggers and jobs, which all run in their own Kubernetes pods.

That’s all there is to it. By default there isn’t even a UI, as the focus is on simply running a set of tasks when events (i.e. a git commit) occur.
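To give you an idea, a minimal `brigade.js` could look like this sketch (the job name and image are just examples):

```javascript
// Minimal brigade.js sketch: run one job whenever a push event comes in
const { events, Job } = require("brigadier")

events.on("push", (e, project) => {
  // Each job runs in its own Kubernetes pod, based on the given image
  var job = new Job("say-hello", "alpine:3.7")
  job.tasks = [
    "echo 'Hello from Brigade!'"
  ]
  job.run()
})
```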

What do I need?

Get yourself a Kubernetes cluster. If you just want to run Brigade locally, try Minikube. Receiving GitHub webhooks does require exposing your local machine to the internet, so use something like localtunnel for that.

Next, let’s find some code to deploy. You could use your own, but if you’d rather use an example project, fork the uuid-generator project from the Brigade tutorials. I’ve forked this repo myself and added the examples below to that. If you want to look at those in advance, see here.

Finally, go through the basic Brigade tutorial, which walks you through creating a Brigade project YAML file (basically a Helm values file; didn’t think we were getting rid of YAML entirely, did you?) and helps you with setting up a GitHub webhook for your Brigade server. Afterwards, read on for some more advanced stuff to get this code running on your cluster!
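For reference, the project file from that tutorial is a small Helm values file along these lines (all values here are placeholders; check the tutorial for the exact fields your chart version expects):

```yaml
project: "YOUR_USERNAME/uuid-generator"
repository: "github.com/YOUR_USERNAME/uuid-generator"
cloneURL: "https://github.com/YOUR_USERNAME/uuid-generator.git"
# Shared secret used to validate incoming GitHub webhook payloads
sharedSecret: "MySuperSecret"
github:
  token: "YOUR_GITHUB_TOKEN"
```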

Building a container

If you’ve followed Brigade’s basic tutorial, you should now be running unit tests in a Python container on every commit to the uuid-generator repo. But since we like containers, this code by itself isn’t that useful yet.

First, create a Dockerfile with all dependencies needed to run this app and place it in the root of the repo. See my uuid-generator fork for an example.
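As a sketch, a Dockerfile for a small Python app like this could look as follows (the base image, port and entrypoint are assumptions; adapt them to the app):

```dockerfile
FROM python:3.6-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```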

Next, we’ll need to add an event to the `brigade.js` file which is emitted after the unit testing job completes. This can be done by chaining jobs using promises, like so:

```javascript
testJob.run().then(() => {
  events.emit("test-done", e, project)
})
```

Emitting an event like this ensures that after the “testJob” is completed, a new event is fired. This event can then trigger more jobs. Note that if you don’t use promises, and for instance call `run()` on two jobs, they will run in parallel.
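The difference is easy to see with plain promises, outside of Brigade (a minimal Node.js sketch; `fakeRun` stands in for a job’s `run()`):

```javascript
// Plain Node.js sketch: run() returns a promise, so chaining with .then()
// serializes jobs, while two bare calls would start them in parallel.
const finished = [];

function fakeRun(name, ms) {
  // Stand-in for job.run(): resolves after `ms` milliseconds
  return new Promise(resolve =>
    setTimeout(() => { finished.push(name); resolve(name); }, ms));
}

// Sequential: "test" always finishes before "build" even starts,
// despite "build" being much faster on its own.
const pipeline = fakeRun("test", 30).then(() => fakeRun("build", 5));

pipeline.then(() => console.log(finished.join(" -> ")));
// prints "test -> build"
```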

Now that this new event is fired, we can go ahead and create the job to build and push a Docker container. We’ll use docker-in-docker (or “dind”) for this, to prevent having to expose the Docker socket on the host.

Also, since we’re pushing to Dockerhub from this pipeline, go ahead and add your Docker credentials at the bottom of your project YAML, under secrets:

```yaml
secrets:
  dockerLogin: YOUR_USERNAME
  dockerPass: YOUR_PASSWORD
```

Update these values on your cluster using `helm upgrade uuid-generator brigade/brigade-project -f values.yaml`, replacing any values with whatever you’ve named your project and files. Remember not to commit the file! You probably don’t want your Dockerhub credentials on GitHub.

The end result of this job is a shiny new image on Dockerhub which we can now deploy to Kubernetes. Since this job fires a new event called “build-done”, all we’ll have to do is write the next job and a Kubernetes deployment file.
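Put together, the build-and-push job could look something like this sketch (the dind image and the exact daemon startup incantation are assumptions on my part; see my fork for the full version):

```javascript
// Build and push the image using docker-in-docker (dind)
events.on("test-done", (e, project) => {
  var docker = new Job("build-and-push", "docker:stable-dind")
  docker.privileged = true  // dind needs a privileged pod

  var image = project.secrets.dockerLogin + "/uuid-generator"
  docker.tasks = [
    "dockerd-entrypoint.sh &",  // start the Docker daemon in the background
    "sleep 20",                 // crude wait for the daemon to come up
    "cd /src",
    "docker build -t " + image + " .",
    "docker login -u " + project.secrets.dockerLogin + " -p " + project.secrets.dockerPass,
    "docker push " + image
  ]

  docker.run().then(() => {
    events.emit("build-done", e, project)
  })
})
```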

Deploying to Kubernetes

Whenever `kubectl` is executed within a pod in a Kubernetes cluster, it automatically uses the service account bound to the pod. If you don’t have RBAC, by default this means kubectl can do anything on the cluster. This is fine for the purposes of this guide, but if you’re not running this locally, you may want to look into configuring RBAC.
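If you do set up RBAC, a minimal configuration could bind a dedicated service account to a namespace-scoped role along these lines (the names and namespace here are examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: default
rules:
  # Only allow managing deployments, nothing else
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: brigade-worker
    namespace: default
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```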

With that out of the way, what we’ll need now is `kubectl` in a container, for which you can use my image `tettaji/kubectl:1.10.3`. You’ll also need a Kubernetes deployment file. For learning purposes, I recommend you write your own deployment file, or you could use my example in the uuid-generator fork.
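If you’d rather write your own, a minimal deployment file could look like this sketch (the image name and container port are assumptions; match them to your own app):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uuid-generator
  template:
    metadata:
      labels:
        app: uuid-generator
    spec:
      containers:
        - name: uuid-generator
          image: YOUR_USERNAME/uuid-generator:latest
          ports:
            - containerPort: 8080
```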

Next, let’s create the deployment job:

```javascript
// Triggers after the Docker image is built
events.on("build-done", (e, project) => {
  var deploy = new Job("deploy-app", "tettaji/kubectl:1.10.3")
  deploy.tasks = [
    "cd /src",
    // Path to the deployment file within the repo
    "kubectl apply -f kubernetes/deployment.yaml"
  ]
  deploy.run().then(() => {
    // We'll probably want to do something with a successful deployment later
    events.emit("success", e, project)
  })
})
```

And that’s it! Your shiny new Kubernetes app has been deployed to your cluster automatically.

What’s next?

To summarize, we’ve created a basic pipeline, taking an application from code to running on a Kubernetes cluster. However, for true CI/CD you may want to take it a few steps further.

For example, you’ll probably want to visualize the status of your Brigade pipelines. Brigade has a separate service called Kashti that does just that. Keep in mind, though, that it’s in early alpha, so it may not be ready for anything beyond experimentation yet.

And how about adding a Slack webhook URL to your project secrets and sending a notification on success or failure? You can see this at the bottom of my example here. Take a look at Brigade’s docs for other integrations, such as triggering a job on an image push to a Dockerhub repo.
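For instance, a notification job could look like this sketch, using the `technosophos/slack-notify` image from the Brigade examples (the environment variable names follow that image; the webhook URL and secret name are assumptions):

```javascript
// Send a Slack message once the deployment succeeded
events.on("success", (e, project) => {
  var slack = new Job("slack-notify", "technosophos/slack-notify:latest")
  slack.env = {
    SLACK_WEBHOOK: project.secrets.slackWebhook,
    SLACK_USERNAME: "brigade",
    SLACK_TITLE: "Deployment finished",
    SLACK_MESSAGE: "uuid-generator was deployed successfully!"
  }
  slack.run()
})
```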

Finally, since we’ve only got unit tests, how about adding a stage for integration and end-to-end tests which triggers after a deployment is done? For that, you may want to create multiple Kubernetes namespaces, as well as multiple chained deployment jobs. With these separated namespaces, a deployment meant for testing goes to a “test” namespace before you deploy to your production namespace.

I’d be interested in seeing what you come up with, so don’t forget to share!