software at the end of the universe

The Jenkinsfile-Runner-Fn project is a Project Fn (a container native, cloud agnostic serverless platform) function to run Jenkins pipelines. It will process a GitHub webhook, git clone the repository and execute the Jenkinsfile in that git repository. It scales automatically and, being pay per use, has zero cost when not used.

This function allows Jenkinsfile execution without needing a persistent Jenkins master running, in the same way as Jenkins X Serverless, but using the Fn Project platform (and supported providers like Oracle Functions) instead of Kubernetes.

Project Fn vs AWS Lambda

The function is very similar to the one in jenkinsfile-runner-lambda, with just a small change in the signature. The main difference between Lambda and Fn is in the packaging: Lambda layers are limited in size and are expanded in /opt, while Fn allows a custom Dockerfile where you can install whatever you want in a much easier way; you just need to include the function code and entrypoint from fnproject/fn-java-fdk.
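A minimal sketch of what such a Dockerfile could look like (an illustration, not the project's actual Dockerfile; the base image tag, jar name and handler class are placeholders):

# sketch only: tag, package manager and handler class depend on the actual project
FROM fnproject/fn-java-fdk:latest
WORKDIR /function
# install whatever extra tools the pipelines need
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# add the function code and point the FDK at the handler method
ADD target/jenkinsfile-runner-fn.jar /function/app/
CMD ["com.example.fn.JenkinsfileRunner::handleRequest"]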

Oracle Functions

Oracle Functions is a cloud service providing Project Fn function execution (currently in limited availability). The jenkinsfile-runner-fn function runs in Oracle Functions, with the caveat that it needs a syslog server running somewhere to get the logs (see below).

The Jenkinsfile-Runner-Lambda project is an AWS Lambda function to run Jenkins pipelines. It will process a GitHub webhook, git clone the repository and execute the Jenkinsfile in that git repository. It allows huge scalability with 1000+ concurrent builds and pay per use with zero cost if not used.

This function allows Jenkinsfile execution without needing a persistent Jenkins master running in the same way as Jenkins X Serverless, but using AWS Lambda instead of Kubernetes. All the logs are stored in AWS CloudWatch and are easily accessible.

Why???

Why not?

I mean, it could make sense to run Jenkinsfiles in Lambda when you are building AWS-related stuff, like creating an artifact and uploading it to S3.

The Jenkinsfile must add /usr/local/bin to the PATH and use /tmp for any tool that needs to write files; see the example in the repository, or the sketch below.
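A minimal sketch of what such a Jenkinsfile could look like, assuming Maven from the tools layer (the actual example in the repository may differ):

// sketch: prepend the layer tools to the PATH and keep all writes under /tmp
pipeline {
    agent any
    environment {
        PATH = "/usr/local/bin:${env.PATH}"
    }
    stages {
        stage('build') {
            steps {
                // /tmp is the only writable directory in Lambda
                sh 'mvn -Dmaven.repo.local=/tmp/.m2 package'
            }
        }
    }
}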

Extending

Three Lambda layers are created:

jenkinsfile-runner: the main library

plugins: minimal set of plugins to build a Jenkinsfile

tools: git, openjdk, maven

You can add your plugins in a new layer, as a zip file with the plugins inside a plugins dir, to be expanded into /opt/plugins (see the layout below). You could also add the Configuration as Code plugin and configure the Artifact Manager on S3 plugin to store all your artifacts in S3.
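For instance, a layer zip with the following layout (the plugin files here are just illustrative) would get the plugins expanded into /opt/plugins:

plugins/
├── configuration-as-code.jpi
└── artifact-manager-s3.jpi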

Other tools can be added as new layers, and they will be expanded in /opt. You can find a list of scripts for inspiration in the lambci project (gcc, go, java, php, python, ruby, rust) and in bash, git and zip (git is already included in the tools layer here).

The layers are built with Docker, installing jenkinsfile-runner, tools and plugins under /opt, which is where Lambda layers are expanded. These files are then zipped for upload to Lambda.

Installation

Create a Lambda function named jenkinsfile-runner using the Java 8 runtime. Use the layers built in target/layer-* and target/jenkinsfile-runner-lambda-*.jar as the function code. You can use make publish to create them.
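A sketch of the equivalent AWS CLI calls, assuming the layers are already zipped under target/ (the handler class, role ARN, file names and layer ARN are placeholders):

# publish each layer built under target/layer-*
aws lambda publish-layer-version --layer-name jenkinsfile-runner \
  --zip-file fileb://target/layer-jenkinsfile-runner.zip

# create the function with the Java 8 runtime and the published layers
aws lambda create-function --function-name jenkinsfile-runner \
  --runtime java8 \
  --handler org.example.Handler::handleRequest \
  --role arn:aws:iam::123456789012:role/jenkinsfile-runner \
  --timeout 900 --memory-size 1024 \
  --zip-file fileb://target/jenkinsfile-runner-lambda-1.0.jar \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:jenkinsfile-runner:1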

Exposing the Lambda Function

From the Lambda function configuration page add an API Gateway trigger. Select Create a new API and choose the security level. Save the function and you will get an HTTP API endpoint.

Note that to achieve asynchronous execution (GitHub webhook calls will time out if your function takes too long) you would need to configure API Gateway to send the payload to SNS and then have Lambda listen to SNS events. See an example.

Shipper enables blue-green and multi cluster deployments for the Helm charts built by Jenkins X, but has limitations on the contents of the chart. You could do blue-green between staging and production environments.

Istio allows sending a percentage of the traffic to staging or preview environments by just creating a VirtualService.

Flagger builds on top of Istio and adds canary deployment, with automated roll out and roll back based on metrics. Jenkins X promotions to the production environment can automatically be canary-enabled for a graceful roll out by creating a Canary object.

Shipper

Because Shipper has multiple limitations on the Helm charts it supports, I had to make some changes to the app. Also, Jenkins X only builds the Helm package from master, so we can’t do rollouts of PRs, only of the master branch.

App rollout failed with the Jenkins X generated charts due to a generated templates/release.yaml, probably a conflict with the jenkins.io Release CRD.

Chart croc-hunter-jenkinsx-0.0.58 failed to render:
could not decode manifest: no kind "Release" is registered for version "jenkins.io/v1"

We just need to change jx step changelog to jx step changelog --generate-yaml=false so the file is not generated.

For multi cluster deployments, the Shipper application yaml needs to use public URLs for both chartmuseum and the docker registry, so the other clusters can reach the management cluster services to download the charts.

Istio

We can create this Virtual Service to send 1% of the traffic to a Jenkins X preview environment (for PR number 35), for all requests coming to the Ingress Gateway for the host croc-hunter.istio.example.org.
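Something like this sketch (the namespaces, service names and gateway reference follow common Jenkins X and Istio conventions and are assumptions here):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: croc-hunter
  namespace: jx-production
spec:
  gateways:
  - public-gateway.istio-system.svc.cluster.local
  hosts:
  - croc-hunter.istio.example.org
  http:
  - route:
    - destination:
        host: croc-hunter.jx-production.svc.cluster.local
        port:
          number: 80
      weight: 99
    # preview environment for PR 35; the namespace name is an assumption
    - destination:
        host: croc-hunter.jx-preview-pr-35.svc.cluster.local
        port:
          number: 80
      weight: 1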

Flagger

We can create a Canary object for the chart deployed by Jenkins X in the jx-production namespace, and all new Jenkins X promotions to jx-production will be rolled out 10% at a time and automatically rolled back if anything fails.
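A sketch of such a Canary object, using the flagger.app/v1alpha3 API from around that time (the app name, port and metric thresholds are assumptions):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: croc-hunter
  namespace: jx-production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: croc-hunter
  service:
    port: 8080
  canaryAnalysis:
    # check metrics every minute, shifting 10% of traffic each time, up to 50%
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
    - name: request-success-rate
      threshold: 99
      interval: 1m
    - name: request-duration
      threshold: 500
      interval: 1m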

Progressive Delivery is the next step after Continuous Delivery: new versions are deployed to a subset of users and evaluated for correctness and performance before being rolled out to all users, or rolled back if they do not match some key metrics.

There are some interesting projects that make this easier in Kubernetes, and I’m going to talk about three of them that I took for a spin with a Jenkins X example project: Shipper, Istio and Flagger.

Shipper

Shipper is a project from booking.com extending Kubernetes to add sophisticated rollout strategies and multi-cluster orchestration (docs). It supports deployments from one to multiple clusters, and allows multi-region deployments.

Shipper uses Helm packages for deployment, but they are not installed with Helm and won’t show up in helm list. Also, Deployments must use apiVersion: apps/v1 or Shipper will not edit the Deployment to add the right labels and replica count.

Rollouts with Shipper are all about transitioning from an old Release, the incumbent, to a new Release, the contender. This is achieved by creating a new Application object that defines the stages that the deployment goes through. For example, for a 3 step process (see the sketch after the list):

Staging: Deploy the new version to one pod, with no traffic

50/50: Deploy the new version to 50% of the pods and 50% of the traffic

Full on: Deploy the new version to 100% of the pods and 100% of the traffic
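The strategy section of the Application object for those three steps could look like this sketch (based on the Shipper docs; the chart and region details are placeholders):

apiVersion: shipper.booking.com/v1alpha1
kind: Application
metadata:
  name: myapp
spec:
  template:
    chart:
      name: myapp
      version: "0.0.1"
      repoUrl: https://charts.example.com
    clusterRequirements:
      regions:
      - name: local
    strategy:
      steps:
      - name: staging
        capacity:
          incumbent: 100
          contender: 1
        traffic:
          incumbent: 100
          contender: 0
      - name: 50/50
        capacity:
          incumbent: 50
          contender: 50
        traffic:
          incumbent: 50
          contender: 50
      - name: full on
        capacity:
          incumbent: 0
          contender: 100
        traffic:
          incumbent: 0
          contender: 100
    values:
      replicaCount: 3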

If a step in the release does not send traffic to the pods, they can be accessed with kubectl port-forward, e.g. kubectl port-forward mypod 8080:8080, which is useful for testing before users can see the new version.

Shipper supports the concept of multiple clusters, but treats all clusters the same way, only using regions and filtering by capabilities (set in the cluster object), so there’s no option to have dev, staging and prod clusters with just one Application object. But we could have two Application objects:

myapp-staging deploys to region “staging”

myapp deploys to other regions

In GKE you can easily configure a multi cluster ingress that will expose the service running in multiple clusters and serve from the cluster closest to your location.

Limitations

Chart restrictions: the Chart must have exactly one Deployment object. The name of the Deployment should be templated with {{.Release.Name}}, and the Deployment object should have apiVersion: apps/v1 (see the sketch after this list).

Pod-based traffic shifting: there is no way to have fine grained traffic routing, e.g. sending 1% of the traffic to the new version; it is based on the number of pods running.

New Pods don’t get traffic if Shipper is not working

Istio

Istio is not a deployment tool but a service mesh. However, it is interesting as it has become very popular and allows traffic management, for example sending a percentage of the traffic to a different service, and other advanced networking.

With Istio we can create a Gateway that processes all external traffic through the Ingress Gateway and create VirtualServices that manage the routing to our services. To do that, just find the ingress gateway IP address and configure a wildcard DNS for it, then create the Gateway that will route all external traffic through the Ingress Gateway.
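For example, this sketch of a Gateway (the name and namespace are conventional assumptions) accepts all external HTTP traffic on the Ingress Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"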

Istio does not manage the app lifecycle, just the networking. We can create a Virtual Service to send 1% of the traffic to the service deployed in a pull request or in the master branch, for all requests coming to the Ingress Gateway.

Flagger

Flagger is a project sponsored by WeaveWorks that uses Istio to automate canary deployments and rollbacks, driven by metrics from Prometheus. It goes beyond what Istio provides, automating progressive rollouts and rollbacks based on those metrics.

The deployment rollout is defined by a Canary object that will generate primary and canary Deployment objects. When the Deployment is edited, for instance to use a new image version, the Flagger controller will shift the load from 0% to 50% in 10% increases every minute, then promote the new deployment, or roll back if metrics such as response errors and request duration fail.

Comparison

This table summarizes the strengths and weaknesses of both Shipper and Flagger in terms of a few Progressive Delivery features.

Jenkins X is a new open source CI/CD platform for Kubernetes based on Jenkins.
Jenkins X runs on Kubernetes and transparently uses on demand containers to run build agents and jobs, and isolate job execution. It enables CI/CD-as-code using Jenkins Pipelines and automated deployments of commits and pull requests using Skaffold, Helm and other popular tools. We will demo how to use Jenkins X on any Kubernetes cluster for fully automated CI and CD using a GitOps approach.

Building and testing is a great use case for containers, due to both the dynamic and isolation aspects. We will share our experience running Jenkins at scale using Kubernetes.

Jenkins is an example of an application that can take advantage of Kubernetes technology to run Continuous Integration and Continuous Delivery workloads. Jenkins and Kubernetes can be integrated to transparently use on demand containers to run build agents and jobs, and isolate job execution. It also supports CI/CD-as-code using Jenkins Pipelines and automated deployments to Kubernetes clusters. The presentation and demos will allow a better understanding of how to use Jenkins on Kubernetes for container based, totally dynamic, large scale CI and CD.