Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker

It is a nice experience: push code to a branch in a Git repository and, a little while later, find the freshly built application up and running in the live environment. That is exactly what Wercker can do for me.

The Oracle + Wercker Cloud service allows me to define applications based on Git repositories. For each application, one or more workflows can be defined, each composed of one or more pipelines (steps). A workflow can be triggered by a commit on a specific branch in the Git repository. A pipeline can do various things, including building a Docker container from the sources as the runtime for the application, pushing that container to a container registry, and deploying containers from this registry to a Kubernetes cluster.

In this article, I will show the steps I went through to set up the end-to-end workflow for a Node JS application that I had developed and tested locally and then pushed to a repository on GitHub. This end-to-end workflow is triggered by any commit to the master branch. It builds the application runtime container, stores it and deploys it to a Kubernetes cluster running on Oracle Cloud Infrastructure (the Container Engine Cloud).

1. Add an Application to my Wercker account

2. Step through the Application Wizard:

Since I am logged in to Wercker using my GitHub account details, I am presented with a list of all my repositories. I select the one that holds the code for the application I am adding:

Accept checking out the code without SSH key:

Step 4 presents the configuration information for the application. Press Create to complete the definition of the application.

The successful creation of the application is indicated.

3. Define the build steps in a wercker.yml

The build steps that Wercker executes are described by a wercker.yml file. This file is expected in the root of the source repository.

Wercker offers help with the creation of the build file. For a specific language, it can generate a skeleton wercker.yml file that already refers to the base box (a language-specific runtime) and has the outline for the steps to build and push a container.

In my case, I have created the wercker.yml file manually and already included it in my source repo.

Here is part of that file.

Based on the box node8 (the base container image), it defines three building blocks: build, push-to-releases and deploy-to-oke. The first one is standard for Node applications and builds the application (well, it gathers all node modules). The second one takes the resulting container image from the first step and pushes it to the Wercker Container Registry with a tag composed from the branch name and the Git commit ID. The third one is a little more elaborate. It takes the container image from the Wercker registry and creates a Kubernetes deployment that is subsequently pushed to the Kubernetes cluster indicated by the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN.
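As an illustration of this structure, a condensed sketch of such a wercker.yml could look as follows. The step names and parameters here are approximations based on the description above, not a copy of the actual file in my repo:

```yaml
box: node:8

# Build the application: gather the node modules.
build:
  steps:
    - npm-install

# Push the resulting image to the Wercker Container Registry,
# tagged with the branch name and the Git commit ID.
push-to-releases:
  steps:
    - internal/docker-push:
        repository: wcr.io/$WERCKER_APPLICATION_OWNER_NAME/$WERCKER_APPLICATION_NAME
        tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT

# Create a Kubernetes deployment from that image and apply it to the
# cluster identified by KUBERNETES_MASTER and KUBERNETES_TOKEN.
deploy-to-oke:
  steps:
    - kubectl:
        server: $KUBERNETES_MASTER
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f kubernetes-deployment.yml
```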

4. Define Pipelines and Workflow

In the Wercker console, I can define workflows for my application. These workflows consist of pipelines, organized in a specific sequence. Each pipeline is triggered by the completion of the previous one. The first pipeline is typically triggered by a commit event in the source repository.

Before I can compose the workflow I need, I first have to set up the pipelines, corresponding to the build steps in the wercker.yml file in the application source repo. Click on Add new pipeline.

Define the name for the new pipeline (anything you like) and the name of the YML Pipeline – this one has to correspond exactly with the name of the building block in the wercker.yml file.

Click on Create.

Next, create a pipeline for the "deploy-to-oke" step in the YML file.

Press Create to also create this pipeline.

With all three pipelines available, we can complete the workflow.

Click on the plus icon to add a step to the workflow. Associate this step with the pipeline push-docker-image-to-releases:

Next, add a step for the final pipeline:

This completes the workflow. If you now commit code to the master branch of the GitHub repo, the workflow will be triggered and will start to execute. The execution will fail, however: the wercker.yml file contains various references to variables that need to be defined for the application (or the workflow, or even the individual pipeline) before the workflow can succeed.

Crucial in making the deployment to Kubernetes successful are the files kubernetes-deployment.yml.template and ingress.yml.template. These files are used as templates for the Kubernetes deployment and ingress definitions that are applied to Kubernetes. They define important details such as:

The container image in the Wercker Container Registry from which to create the Pod

Port(s) to be exposed from each Pod

Environment variables to be published inside the Pod

URL path at which the application’s endpoints are accessed (in ingress.yml.template)
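To give an impression of what these template files contain, here is an illustrative sketch. The placeholder syntax, service name and port number are assumptions for illustration; only the ingress path /eventmonitor-ms/app comes from the actual application:

```yaml
# kubernetes-deployment.yml.template (sketch)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: eventmonitor-ms
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: eventmonitor-ms
    spec:
      containers:
        - name: eventmonitor-ms
          # image in the Wercker Container Registry to create the Pod from
          image: wcr.io/${DOCKER_REPO}:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}
          ports:
            - containerPort: 8080        # port exposed from the Pod (assumed)
          env:
            - name: EVENTHUB_HOST        # hypothetical variable published inside the Pod
              value: "${EVENTHUB_HOST}"
---
# ingress.yml.template (sketch)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eventmonitor-ms-ingress
spec:
  rules:
    - http:
        paths:
          - path: /eventmonitor-ms/app
            backend:
              serviceName: eventmonitor-ms-svc   # assumed service name
              servicePort: 8080
```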

5. Define environment variables

Click on the Environment tab. Set values for all the variables used in the wercker.yml file. Some of these define the Kubernetes environment to which deployment should take place; others provide values that are injected into the Kubernetes Pod and made available as environment variables to the application at run time.
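For illustration, the kind of values involved can be sketched as shell exports (in Wercker they are entered in the Environment tab, not exported by hand); all values below are hypothetical:

```shell
# Hypothetical values: KUBERNETES_MASTER and KUBERNETES_TOKEN are the
# variables the deploy-to-oke step reads; any others depend on what your
# wercker.yml and template files reference.
export KUBERNETES_MASTER="https://k8s-master.example.com:6443"
export KUBERNETES_TOKEN="<token-of-the-deployer-service-account>"
echo "deploy target: ${KUBERNETES_MASTER}"
```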

6. Trigger a build of the application

At this point, the application is truly ready to be built and deployed. One way to trigger this is by committing something to the master branch. Another option is shown here:

The build is triggered. The output from each step is available in the console:

When the build is done, the console reflects the result.

Each pipeline can be clicked to inspect details for all individual steps, for example the deployment to Kubernetes:

Each step can be expanded for even more details:

In these details, we can find the values that have been injected for the environment variables.

7. Access the live application

This final step is not specific to Wercker. It is, however, the icing on the cake: actually making use of the application.

The ingress definition for the application specifies:

This means that the application can be accessed at the endpoint for the K8S ingress at the path /eventmonitor-ms/app/.

Given the external IP address for the ingress service, I can now access the application:

Note: /health is one of the operations supported by the application.
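Put together, accessing the health operation looks something like the following; the IP address below is a placeholder to be replaced with the external IP of the ingress service:

```shell
# Placeholder IP: substitute the EXTERNAL-IP of the ingress service,
# e.g. as reported by `kubectl get svc` in the ingress controller's namespace.
INGRESS_IP="0.0.0.0"
URL="http://${INGRESS_IP}/eventmonitor-ms/app/health"
echo "${URL}"
# then: curl -s "${URL}"
```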

8. Change the application and Roll out the Change – the ultimate proof

The real proof of this pipeline is in changing the application and having that change rolled out as a result of the Git commit.