How To Integrate Terraform Enterprise with Your Jenkins CI/CD Pipelines

I work at a consulting company that does a lot of DevOps and CI/CD projects. One task that commonly comes up in CI/CD pipelines is spinning up test machines to run integration and code deployment tests. These test machines can be on-prem VMs in a VMware environment or, more commonly, cloud instances in AWS/Azure/GCP. While we do have a lot of customers using platform-specific Infrastructure-as-Code (IaC) tools like AWS CloudFormation and Azure’s ARM Templates to spin up these test machines in their pipelines, Terraform has been gaining a ton of momentum as the cross-platform IaC tool of choice. Folks love not only that it provides a powerful, common language for deploying to all their on-prem and cloud environments, but also how simple it is to run the “terraform” command from within a CI/CD pipeline.

Last year we started seeing a lot of our customers that use open source Terraform making the move to HashiCorp’s commercial product, Terraform Enterprise. Terraform Enterprise provides centralized services to enable DevOps teams to collaborate on their use of Terraform, like shared run environments, centralized management of Terraform state, secure variable storage, team management, and centralized audit logging.

The focus of this blog article is to demonstrate how to switch over your CI/CD pipelines from open source Terraform to a central Terraform Enterprise instance. It assumes that you’re already familiar with writing basic Terraform code. Since Jenkins is my primary CI/CD tool, my examples will also include some Jenkins code, but the basic ideas here are applicable to any CI/CD tool.

Set Up Terraform Enterprise for CI/CD

Here are the high-level setup steps we’ll be performing in Terraform Enterprise:

Create a Workspace

Set the Workspace to Auto-apply

Set our cloud credentials as Workspace variables (AWS in this example)

Create a code repo to store the Workspace’s Terraform code

Seed the code repo with a remote-backend.tf file

Generate a user API token

First off, we’ll need to choose or create a Workspace in Terraform Enterprise, in which to spin up/down our pipeline’s test machines. Each Workspace in Terraform Enterprise maintains a separate Terraform State. While you’re developing your first pipeline with Terraform Enterprise, you should definitely use a fresh, new Workspace, so you don’t break other people’s stuff. Longer term, your pipeline(s) will probably share a Workspace that is managing the Terraform State for a group of applications or an environment. Ideally, it’s best if you can use a Workspace that is dedicated to temporary infrastructure (like our CI/CD test machines) – this simplifies IaC hygiene, as you can easily spot/purge any orphan machines (say, anything older than a day) that might have gotten missed in cleanup.

Anyway, the only real requirement for our initial Workspace is that it will need to be set to “Auto-apply” so that our pipeline can run completely automated. If we don’t set it to “Auto-apply,” Runs (not necessarily initiated by our CI/CD pipeline) can stack up in the “Pending” state, and our pipeline won’t be able to Apply its Run until we’ve manually cancelled or approved the “Pending” Runs. Here I’ve created an example Workspace for this article called “ws-aws-ex1.”

Also, for this example, I’m not going to be setting up webhooks in GitHub (I’ll touch on that in my next article), so be sure to leave the VCS field in the Workspace set to None.

If this is a new Workspace, don’t forget to set up any Variables you need, such as credentials for connecting to your cloud. In my Workspace, below, I added connection variables for Terraform’s AWS provider.
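For reference, a provider block that consumes Workspace variables like these might look something like the following sketch (the variable names here are assumptions; many teams instead set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment-type Workspace variables and omit the credentials from the provider block entirely):

```hcl
# Hypothetical variable names - set their values in the Workspace's
# Variables tab (mark the secret key as "Sensitive").
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_region" {}

provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}
```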

Next, we’ll need a code repo to store all the Terraform code for our Workspace. I created a repo in my local GitHub Enterprise server called “ws-aws-ex1” to match the name of my Workspace. I generally recommend not putting your Jenkinsfile within the Workspace’s repo. This is because there usually isn’t a 1:1 relationship between pipelines and workspaces. It’s pretty common to have a pipeline that needs to spin up resources in multiple workspaces, and vice versa.

Also be sure to add a .gitignore file to your repo, to ignore all of Terraform’s hidden files and directories. You can easily do this on GitHub’s repo creation screen, as Terraform is one of the built-in .gitignore types.
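GitHub’s built-in Terraform .gitignore template covers entries like these (abridged; the full template ignores a few more override and plan files):

```
# Local .terraform directories
**/.terraform/*

# Terraform state files
*.tfstate
*.tfstate.*

# Crash log files
crash.log
```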

Now that our Workspace’s code repo is created, we’ll seed it with a configuration file used by the terraform executable. Keep in mind that the terraform executable is one of multiple options for connecting to Terraform Enterprise to kick off a Workspace run – I’ll detail the pros and cons of the various options in the next blog article. For this article, we’ll use the terraform executable since that is what all Terraform users are familiar with.

Go ahead and add the following configuration file to your Workspace’s GitHub repo. Fill in the appropriate values for hostname, organization, and Workspace name. Name it remote-backend.tf.
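The file is a standard remote-backend block; a sketch with placeholder values looks like this (swap in your own Terraform Enterprise hostname, organization, and Workspace name):

```hcl
# remote-backend.tf - points the terraform CLI at our Workspace
# in Terraform Enterprise. All values below are placeholders.
terraform {
  backend "remote" {
    hostname     = "tfe.example.com"
    organization = "my-org"

    workspaces {
      name = "ws-aws-ex1"
    }
  }
}
```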

And for our last setup step, we’ll need to generate a user API token for our pipeline to authenticate to Terraform Enterprise. This should definitely be a user token (preferably from a service account), and not a team token. While many functions and API calls will appear to work just fine with a team token, as of the time of this article, some will not. For example, I was working with a customer just last week who was using a team token to submit Workspace runs via the API. While the runs would “plan” just fine, the Auto-apply setting (above) was being ignored and their pipelines were stuck waiting for manual intervention – this turned out to be a quirk of using a team token.

To generate a user token, log into the UI with the service account you want to use, click the User icon at the top-right corner, and choose User Settings. From the User Settings menu on the left, click Tokens.

Pipeline Steps

Now that we’ve set up Terraform Enterprise with all the prerequisites that our pipeline will need, let’s go ahead and start writing our pipeline. If you’re feeling impatient, skip to the end of the article for the link to download the example Jenkinsfile.

Here are the high-level steps that our pipeline will need in order to spin up machines via Terraform Enterprise.

Add new Terraform code to the code repo

Run the Workspace (i.e. “terraform plan” and “terraform apply”)

Do whatever integration or deployment testing you need to do

Cleanup (destroy) the test machines
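Stitched together, those steps might look something like the following minimal declarative Jenkinsfile skeleton. The credential ID, repo URL, and stage bodies here are placeholder assumptions for illustration – they are not the working Jenkinsfile, which you can download at the end of the article.

```groovy
pipeline {
    agent any
    environment {
        // Unique suffix for this run's Terraform resources
        TF_CODE_ID = "${env.BUILD_NUMBER}"
    }
    stages {
        stage('Add Terraform code') {
            steps {
                sh 'rm -rf ws-aws-ex1 || true'
                sh 'git clone https://github.example.com/myorg/ws-aws-ex1.git'
                // ...write example-${TF_CODE_ID}.tf, commit, and push here...
            }
        }
        stage('Run the Workspace') {
            steps {
                // 'tfe-user-token' is a hypothetical Jenkins credential ID
                withCredentials([string(credentialsId: 'tfe-user-token', variable: 'TOKEN')]) {
                    dir('ws-aws-ex1') {
                        sh 'terraform init -backend-config="token=$TOKEN"'
                        sh 'terraform apply'
                    }
                }
            }
        }
        stage('Test') {
            steps {
                // ...run your integration / deployment tests here...
                echo 'Testing...'
            }
        }
        stage('Cleanup') {
            steps {
                // ...delete example-${TF_CODE_ID}.tf, commit, push,
                // and re-run the Workspace to destroy the machines...
                echo 'Cleaning up...'
            }
        }
    }
}
```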

Let’s talk through each step…

1. Add new Terraform code to the Workspace

This stage is where we will define the machines that we’ll be spinning up. First, we’ll need to clone down all the existing code from the Workspace’s code repo:

# Clear old clone directory - make sure no old Terraform code
# or config files are hanging around!
rm -rf <directory to which the repo will clone> || true
# Pull down current Workspace code
git clone <url to my GitHub repo>

Then we’ll create our new Terraform code file. For this example, I’m using a very simple bit of code to spin up an EC2 instance in AWS. This is just a rudimentary example – for a real pipeline, this Terraform code file would typically reference a reusable module.

The “${tfCodeId}” bit is a Jenkins Groovy variable from our pipeline that we’re injecting to make sure Terraform has a unique resource name each time the pipeline runs. The same concept would apply if we were calling a reusable module.
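For illustration, once Jenkins has substituted “${tfCodeId}” (say, with “5”), the generated file might look something like this sketch (the AMI ID and names are hypothetical):

```hcl
# example-5.tf - generated by the pipeline; tfCodeId was "5" for this run
resource "aws_instance" "ci_test_5" {
  ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "ci-test-5"
  }
}
```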

Finally, to wrap up this stage, we’ll commit and push our code changes back up to the GitHub repo.
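In the pipeline this is a plain git add/commit/push. Here’s a self-contained sketch of that flow, with a local bare repo standing in for the GitHub remote and tfCodeId hardcoded instead of injected by Jenkins (paths and identities are illustrative only):

```shell
# Self-contained sketch: a local bare repo stands in for the GitHub remote.
tfCodeId="5"   # injected by Jenkins in the real pipeline
rm -rf /tmp/tfe-demo && mkdir -p /tmp/tfe-demo
git init --bare /tmp/tfe-demo/origin.git
git clone /tmp/tfe-demo/origin.git /tmp/tfe-demo/ws-aws-ex1

# Write the new Terraform code file (just a stub here)
echo "# Terraform code for test machine ${tfCodeId}" \
  > "/tmp/tfe-demo/ws-aws-ex1/example-${tfCodeId}.tf"

# Commit and push it back up to the "remote"
git -C /tmp/tfe-demo/ws-aws-ex1 add "example-${tfCodeId}.tf"
git -C /tmp/tfe-demo/ws-aws-ex1 \
  -c user.email=jenkins@example.com -c user.name="Jenkins CI" \
  commit -m "Add CI test machine ${tfCodeId}"
git -C /tmp/tfe-demo/ws-aws-ex1 push origin HEAD
```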

2. Run the Workspace (i.e. “terraform plan” and “terraform apply”)

As I mentioned previously, there are actually multiple methods to connect to Terraform Enterprise and kick off a Workspace run. Since all Terraform users are familiar with the “terraform” executable, I went ahead and used it for this initial article. That said, there are definite pros and cons to each method, so I’ll be sure to detail those in my next article.

terraform init -backend-config="token=$TOKEN"
terraform apply

Side note: As I was testing the example code for this article, I ran into a problem with the Credential that I created in Jenkins to store my user API token. I originally had it stored as a “Secret text” Credential, but Jenkins must not have liked one of the characters in the token, as it seemed to corrupt it. This is documented in the full Jenkinsfile, which you can download at the end of the article.

3. Do whatever integration or deployment testing you need to do

This step is self-explanatory. Now is the time for your pipeline to do whatever testing you were planning to do with these test machines. If you look at our Workspace’s code repo during this step, you’ll see the new Terraform file, which defines the test machines.

We’ll also see our test machine in Terraform Enterprise, if we take a look at the Workspace’s State.

4. Cleanup (destroy) the test machines

To destroy the instances, we have only to delete the Terraform code file that we created in Step 1, and then re-run the Workspace. This means that the pipeline will need to keep track of the filename it creates in Step 1, so that it knows which file to delete here in Step 4.

Normally, this is the type of cleanup code that you’d want to make sure always runs, even if one of the previous stages fails. However, to keep this example code simple, I just threw it into a final Stage.

Anyway, after Stage 4 runs, we should see in our code repo (example-5.tf, in the previous repo screenshot) and Workspace State that the test machine has been removed.

Now Get the Example Jenkinsfile!

You can download the complete example Jenkinsfile that I wrote for this article here: