CI/CD Pipelines as Code

If you have full control over each aspect of your application’s lifecycle, it’s reasonable to assume you’d want that lifecycle to be as streamlined as possible through automated code merges, unit/regression testing, environment builds, and so on. This is achievable through a Continuous Integration and Continuous Delivery (CI/CD) pipeline: an automated way to orchestrate a series of steps for merging, building, testing, and deploying application code. The delivery pipeline is a tried-and-true method of achieving this level of efficiency, leaving you and your team time to focus on what matters: your application.

Why represent my pipeline as code?

Application delivery should be considered just as critical as the application being delivered. The ability to seamlessly and safely ship new features and updates is key to running a modern, customer-responsive application stack. Applying CI/CD principles within your development lifecycle is now widely expected at the application, platform, and infrastructure layers. To apply those principles, each layer must be represented in textual form and version controlled. This is known as Everything as Code. It’s a well-established pattern for codifying the infrastructure building blocks of your application, and it can be treated as an extension of your application code base.

Most components of an application stack are already automated and enhanced through being represented as code. Why not leverage the same benefits for CI/CD delivery pipelines? By doing so, all of the typical benefits of traditional software development paradigms (e.g. version control, code review, branching strategies) can also be leveraged for the pipeline itself. Putting your pipeline in a code repository and having it created and run automatically by your CI tool is much preferred to having to manually navigate through a confusing UI.

How Jenkins helps with delivery pipelines

While there are many CI/CD tools that support Pipelines as Code (GitLab, Bitbucket, Drone), we’ll focus on Jenkins due to its open-source, community-driven nature. Historically, there have been several approaches to crafting code-driven jobs into “pipelines” with Jenkins, such as the Job DSL Plugin, Jenkins Job Builder (JJB), and the Build Flow Plugin. However, with the release of Jenkins 2.0 and its revamped Pipeline plugin, Pipeline is now the recommended path forward for Pipelines as Code (and CI jobs in general).

Jenkins Pipelines are written in a Groovy-based DSL in the form of a Jenkinsfile stored within your application or infrastructure code repository. Using a dynamic, feature-rich language such as Groovy enables almost limitless automation capabilities within your pipeline. Not familiar with Groovy? Don’t worry, getting started is very intuitive, and the Jenkins folks have even included a Snippet Generator right within your Jenkins installation for reference.
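For a sense of the shape of this DSL, here is a minimal scripted-syntax Jenkinsfile sketch. The stage names and `make` targets are illustrative placeholders, not a prescription for any particular project:

```groovy
// Minimal scripted Pipeline sketch -- stage names and build commands are placeholders
node {
    stage('Checkout') {
        checkout scm            // pull the same repo that contains this Jenkinsfile
    }
    stage('Build') {
        sh 'make build'         // any shell-invocable build step works here
    }
    stage('Test') {
        sh 'make test'
    }
}
```

Because the body of each stage is ordinary Groovy, you get variables, loops, and method calls for free where a GUI-configured job would hit a wall.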

An example Jenkinsfile use-case

At Datapipe, we commonly focus on Infrastructure as Code deployments for our customers’ applications, so our example use-case will be a simple AWS deployment using Terraform, with the code stored on GitHub.

You can view this Jenkinsfile, along with a simple Terraform stack, in this GitHub repo.

Note how each stage of the pipeline is defined with a block of tasks for that particular step in the pipeline process. The Jenkinsfile pipeline merely orchestrates your CI/CD workflow in a concise, codified format.
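To illustrate that stage-by-stage structure, here is a hedged sketch of what a Terraform-driven Jenkinsfile might look like. This is not the exact file from the linked repo; the stage names and the manual approval gate are assumptions for illustration:

```groovy
// Illustrative Terraform pipeline sketch -- not the exact Jenkinsfile from the linked repo
node {
    stage('Checkout') {
        checkout scm                         // fetch the Terraform configuration
    }
    stage('Plan') {
        sh 'terraform plan -out=tfplan'      // write the execution plan to a file
    }
    stage('Apply') {
        input 'Apply the Terraform plan?'    // pause for human approval before changing infrastructure
        sh 'terraform apply tfplan'          // apply exactly the plan that was reviewed
    }
}
```

Writing the plan to a file and applying that file guarantees that what was approved is what gets applied, even if the repo changes in between.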

Once Jenkins has been configured to scan your team’s GitHub Organization, any subsequently created repos containing a Jenkinsfile will be automatically loaded by Jenkins as a new pipeline job. This is one of the most powerful features of the Pipeline plugin – Multibranch Pipelines. Alternatively, you can “import” your Jenkinsfile into a new, manually created pipeline job, either by pointing to an individual SCM repo or by copy/pasting the Groovy itself.

Here is what the Pipeline looks like after it has run – note each of the defined stages listed above:

While the Terraform example pipeline shown here is quite rudimentary, several enhancements are worth considering:

Parallelization of independent tasks.

Build parameters for handling things like multiple environments.

Groovy methods to keep your pipeline DRY.

Secrets management via a call out to something like the Credentials Binding plugin.

Archiving / storing artifacts for other pipeline jobs.

Error handling with try/catch/finally.
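Several of the enhancements above can be combined in one scripted Pipeline. The sketch below assumes the Credentials Binding plugin is installed and that a secret with the ID `deploy-token` exists in Jenkins; the `make` targets and artifact paths are placeholders:

```groovy
// Sketch combining parallel stages, secrets, artifact archiving, and try/finally error handling
node {
    try {
        stage('Test') {
            // run independent test suites in parallel branches
            parallel(
                'unit':        { sh 'make unit-test' },
                'integration': { sh 'make integration-test' }
            )
        }
        stage('Deploy') {
            // Credentials Binding exposes a stored Jenkins secret as an env var for this block only
            withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
                sh 'make deploy'
            }
        }
    } catch (err) {
        currentBuild.result = 'FAILURE'      // mark the build, then rethrow so the run stops
        throw err
    } finally {
        // archive build output whether the run succeeded or failed
        archiveArtifacts artifacts: 'build/**', allowEmptyArchive: true
    }
}
```

The `finally` block is what makes artifacts available for debugging even when an earlier stage fails, which is exactly where you tend to need them.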

Once your pipeline is in place, you can focus more time and energy on delivering the best application possible. Interested in learning more? Read additional blog posts about application development here.

Have something you’d like to share with us? Drop us a line on Twitter or Facebook.