Introduction

This tutorial takes a simple microservice architecture and explains how to set up a concourse pipeline in order to test and deploy single microservices independently without affecting the overall microservice system. Cloud Foundry will be used as the platform to which the microservices are deployed.

The goal of the concourse pipeline - which is built during this tutorial - is to automatically trigger and execute the following steps whenever a developer pushes a change to a git repository:

Pull the code base of the changed microservice.

Run the unit tests.

If the app/microservice is written in a programming language that requires compilation, compile the source code.

Deploy the change automatically to the test environment. Before this deployment, the test and production environments consist of the same microservice versions, so a deployment that succeeds in the test environment should also work in production.

Run smoke tests (a kind of integration test) to verify that the change doesn't break other microservices.

Once the smoke tests in the test environment have succeeded (or failed), send out a message via email (Slack can also be used instead of email).

Trigger the deployment to the production system manually via the concourse.ci web interface; the deployment itself then runs automatically.

The steps above describe a common pattern for building a continuous delivery pipeline, but you should discuss within your team which steps are required for each specific project. For example, there could be more than one test environment a change must pass before it's deployed to production.

I've prepared a simple microservice architecture that can be used during this tutorial. It consists of two services: a customer service written in Java and an order service written in Node.js. Links to the repositories are provided later.

Note that I'm using "concourse" and "concourse.ci" as synonyms.

Prerequisites

To follow this tutorial a few things are required:

A GitHub account because you have to fork the repositories in order to push changes.

A Cloud Foundry account with the Space Developer role in two application spaces. One space will be used for the testing environment and the other for the production environment. You can use public Cloud Foundry targets like
anynines
or
run.pivotal.io.
Alternatively you can
deploy your own Cloud Foundry
but this requires some more things to be in place, so I recommend using a public provider. Some of them have free/trial plans.

You should be able to create a PostgreSQL service instance in each Cloud Foundry space. MySQL should also work but has not been tested. If this is not possible, there is also an alternative described that stores data on the ephemeral disk of the application containers. This solution can be used to follow the tutorial, but it's not usable in real life.

A concourse.ci server. For testing purposes, you can set up a concourse.ci server on your local machine, but this is not intended as a durable solution because the pipeline won't deploy anything while your laptop is shut down. The installation of concourse on your local machine
requires vagrant and virtualbox
and will be explained in the next step of this tutorial. As an alternative to setting up concourse with vagrant, you can also use
docker compose to deploy a concourse server.

A JFrog Artifactory
server. For testing purposes we are setting up an
Artifactory
server on the local machine in the relevant section of this tutorial.

By default, vagrant up checks whether there is a new version of the concourse/lite box. If a new version is available and it has a suffix like "-rc.33", you'll see this error:

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'concourse/lite'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'concourse/lite' is up to date...
A version of the box you're loading is formatted in a way that
Vagrant cannot parse: '2.5.1-rc.33'. Please reformat the version
to be properly formatted. It should be in the format of
X.Y.Z.

In this case, open the Vagrantfile and ensure the following line is present in that file and is not commented out:

config.vm.box_check_update = false

When everything works, you should see vagrant up print the following output:

Authentication is disabled in the local setup, so it's not required to provide a username and password. You should be logged in immediately and see the following screen:

Another Hint: Increase Memory settings for the Vagrant VM

On my MacBook, everything got really slow when testing and building the Java application (see section: "Create concourse job to build the UAA and run the unit tests"). To fix this, I increased the memory of the Vagrant VM to 4 GB. You can do this by adding the following line to the Vagrantfile:
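As a sketch, the change can look like this, assuming the VirtualBox provider (the block variable name vb is illustrative):

```ruby
# Inside the Vagrantfile's configure block
config.vm.provider "virtualbox" do |vb|
  vb.memory = 4096  # give the VM 4 GB of RAM
end
```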

Install the fly CLI and login using the fly CLI

The concourse web interface is only used for displaying the state of the pipelines and for triggering pipelines manually. All other tasks are performed via the fly CLI; creating and deleting pipelines, for example, are done via the CLI.

Mind that the download link of the CLI differs for each operating system.

In order to communicate with the concourse.ci server via the CLI, the CLI needs to know where the concourse server runs:

$
fly --target local login --concourse-url http://192.168.100.4:8080

The concourse server and the fly CLI should have the same version. You'll see the following warning after executing fly login if the concourse server version and fly version differ:

WARNING:

fly version (2.5.1) is out of sync with the target (2.5.0). to sync up, run the following:

fly -t local sync

To get rid of this warning, just execute fly -t local sync and the fly CLI will upgrade/downgrade itself to match the concourse server version.

Learn the basic concourse.ci concepts and setup a hello world pipeline

To verify that our concourse setup is working correctly, let's create a simple pipeline. But first let's have a look at what "pipeline as code" means.

Pipeline as Code

Concourse realizes the concept of "pipeline as code"

Concourse.ci is built around the concept of "pipeline as code". This means we don't click through the web interface to create and configure pipelines; instead, they are described in a
YAML
file (or as code, more generally).

Once a pipeline is described you can upload the yaml file to the concourse server.

The pipeline as code concept has the following benefits:

You can use a source code management system like git to manage pipelines. This makes collaboration easy, and the versioning allows you to trace changes.

Thanks to the source code management system, it's possible to roll back changes to pipelines very easily using tools you already know, git for example.

It's possible to setup the same pipeline on different concourse.ci servers by just changing the target of the fly CLI.

In concourse.ci it's possible to extract so-called "tasks" into separate yaml files, which can then be reused in different pipelines.

Where to put the pipeline code?

Best Practice!!!

Pipeline code should be stored in the same repository as the application that is deployed by the pipeline.

The best practice is to store the yaml files containing the pipeline specifications in the same code repository as the source code of the application/microservice.

Since we use
one code repository for each microservice
the pipelines will also be distributed across different code repositories. In fact - in this tutorial - we'll have multiple concourse pipelines with some shared artefacts.

Each microservice code repository will contain a "ci" directory within the root directory. Inside the ci directory we'll have another directory called "pipelines" where the yaml files describing the pipelines are stored.

In the end it doesn't matter to concourse where the YAML files are stored, but the structure described above is commonly used. Since concourse itself uses concourse for continuous integration, you can have a look at the
concourse github project
for a reference.

First simple pipeline without deploying anything

A concourse pipeline is not required to deploy anything. Abstractly speaking, concourse.ci provides mechanisms to observe resources and to orchestrate the execution of bash scripts. Often the execution of the scripts is triggered when concourse observes a change to a specified resource.

Later we'll define GitHub repositories as resources but for the hello world example we don't use a resource at all.

Hint: Don't skip this step because it's a good proof that the concourse installation works correctly. That's what Hello Worlds are good for.

Let's create a very simple pipeline that executes a bash script. The bash script prints hello world to stdout, which we will then see in the browser interface of concourse.

In your terminal, change into an empty working directory and create a "ci" directory within it. Furthermore, create the file "hello-world.yml" within the ci directory.
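Here is a sketch of what ci/hello-world.yml can contain, based on the concepts explained below. The job name hello-world-job is my choice; the task name hello_world_task matches the one shown later in the web interface:

```yaml
jobs:
- name: hello-world-job        # job name is illustrative
  plan:
  - task: hello_world_task
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}   # official ubuntu image from Docker Hub
      run:
        path: echo
        args: ["Hello World"]
```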

If you're not familiar with YAML, the pipeline specification might be a bit hard to read at first glance. YAML is a superset of JSON with some syntactical extensions. In other words: every valid JSON document is a valid YAML document, but not the other way around.

Probably you are familiar with JSON. To give you a chance to get an idea of what the dashes and whitespace mean in the YAML specification, the same pipeline is specified in JSON below:
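A sketch of that JSON version (the job name hello-world-job is again illustrative):

```json
{
  "jobs": [
    {
      "name": "hello-world-job",
      "plan": [
        {
          "task": "hello_world_task",
          "config": {
            "platform": "linux",
            "image_resource": {
              "type": "docker-image",
              "source": { "repository": "ubuntu" }
            },
            "run": {
              "path": "echo",
              "args": ["Hello World"]
            }
          }
        }
      ]
    }
  ]
}
```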

Bad Practice!!!

Although it is possible to upload JSON to concourse, I don't recommend specifying concourse pipelines in JSON. Pipeline specifications tend to be long, and YAML is much more readable/maintainable for this purpose.

Before diving into theory and exploring what the statements in the pipeline definition mean, let's first get it running. To do so, it's required to upload the pipeline specification to the concourse server. For this purpose, execute the following command:
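Assuming the target alias local from the login step and hello-world as the pipeline name, the command looks like this:

```shell
fly -t local set-pipeline -p hello-world -c ci/hello-world.yml
```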

The arguments of this command are explained in the next section, "The Hello World pipeline explained".

If you want to update the pipeline definition on the concourse server, you would edit the ci/hello-world.yml file and execute the same command again.

Every time you change something in the pipeline specification, the set-pipeline subcommand will print all differences between the new pipeline specification and the one currently on the concourse server.

In this example it's the initial upload, so the command will print the whole pipeline specification as "added" content. Confirm the changes by typing "y" and pressing enter.

Every time a pipeline specification is uploaded with the set-pipeline subcommand the changes must be confirmed with "y".

So far the pipeline will not be executed by concourse because it's paused. To unpause the pipeline, click the leftmost icon in the blue bar. A side navigation appears listing all pipelines uploaded to this concourse server. Next to the name of the "hello-world" pipeline there is a play symbol you have to click in order to unpause the pipeline.

The term "build" is also explained in the next section "Concourse.ci core concepts"

Once the pipeline is unpaused you can start a so-called "build", which will then execute the pipeline. The following animation shows how to unpause the pipeline and how to start a build afterwards.

In the background, concourse will download a Docker image, so it could take a few minutes until the pipeline execution is finished. As long as a pipeline is running, it's displayed in yellow. When the last pipeline execution has finished successfully, it's displayed in green.

Once the pipeline is finished you can click on "hello_world_task" to see the stdout of the hello world task in the pipeline. It should look like this:

The next section explains the core concepts of concourse.ci, and the section after that explains the hello world pipeline in detail.

Concourse.ci Concepts

After the celebration of the successfully configured hello world pipeline, this section explains what we actually did.

The concourse.ci domain model illustrated in UML.

First, let's have a look at the basic concourse.ci concepts used in the last section.

Note that the illustration is intended to show concourse.ci concepts, not the concourse.ci architecture. It is also simplified; the reality is more abstract.

The above image illustrates the concourse.ci domain model and shows how the concepts relate to each other. All concepts relevant for the hello world pipeline are highlighted in blue.

Pipelines

Pipelines are the central concept in concourse.ci. Because continuously integrating, delivering and deploying software is the primary use case of concourse.ci we can think of a pipeline as a description of how software changes flow into the staging/production system.

In other words, a pipeline describes the stages (quality gates) a change must pass before it gets released. Although there might be other concourse.ci use cases, this definition of a pipeline is a good mental model for building the first pipeline.

For this tutorial the stages are:

Run unit tests

Deploy to the test environment

Execute smoke tests

Deploy to the production environment

These four stages will be realized as concourse.ci jobs (see below).

The list of supported resources can be extended, but for this tutorial the existing resource types are sufficient.
The logic for detecting whether a resource has changed, and for pulling and pushing a specific resource type, is encapsulated in a Docker image. Hence anyone can provide a new concourse resource type by simply putting three executable scripts into a Docker image. See concourse.ci/implementing-resources.html for a reference or git-resource for an example.

Resources

We did not use resources yet in the hello world example, but they are crucial for explaining what pipelines are.

Resources are intended to flow through pipelines whenever they change. A resource can be defined as an input of one or multiple jobs.

In concourse.ci there are predefined resource types.

Resources are the inputs of jobs.

Later in this tutorial we'll define two git resources, one for each microservice. Each of these two git resources will point to a git repository (hosted on GitHub in our example). Concourse knows how to observe a git repository in order to detect whether it has changed. This means concourse will poll the git repository periodically and check whether there are new commits.

Once concourse detects a change to a resource, it will pull the content of the resource and pass it to the jobs specified in the pipeline definition. For the git resource type, for example, concourse will clone the git repository and then pass all cloned files/directories to the jobs declared to receive the resource.

Besides observing a resource and pulling its content, concourse also provides the possibility to update a resource. This means that we can create new versions of a resource during the delivery process.

An example where we'll need to create a new version of a resource: our pipeline builds an executable jar file out of the Java sources in the git repository. The jar file is represented as a concourse resource which is updated by the concourse pipeline. The resource type used to manage jar files is the
artifactory resource type.

Jobs

Jobs describe the actual work a pipeline does. A job consists of a build plan, which itself consists of multiple steps. The steps in a build plan can be arranged to run in parallel or one after the other (a combination of parallel and sequential is also possible).

There are three different types of steps that are explained next.

Fetch Resource Steps (via "get" keyword)

This kind of step tells concourse which resources are required to execute the build plan. Concourse will then provide the latest version of the specified resources to the task steps.

Task Steps

A task consists of a shell command (or a shell script) and the name of a Docker image in which the script is executed. Tasks are the workhorses of concourse.ci.

In our example we'll specify how to run the test suite and compile Java code in tasks. A task in concourse can have multiple inputs and multiple outputs (later we will see how we use tasks with inputs and outputs).

Update Resource Step (via "put" keyword)

When a put step is specified in a build plan, a build will update a resource. In contrast to outputs of tasks, the put step updates the resource persistently, whereas an output of a task only lives during the execution of a build plan. Outputs of tasks are intended to pass data from one task of a build plan to another task of the same build plan.

Before we put the theory aside, let's have a look at the yaml specification of the hello world pipeline and see whether we recognise some of the concepts we just learned (this is done in the "The Hello World pipeline explained" section).

After this we'll extend the hello world example to use a resource ("Hello World with Resources" section).

Last but not least we'll deploy the microservices in the following sections.

The Hello World pipeline explained

The following pipeline specification is the same we used in the "First simple pipeline without deploying anything" section. In this section we'll match the different parts of the yaml to the concepts introduced in the last section. The image below annotates the pipeline specification with the concept represented in the specific lines.

The same hello world pipeline as in the "First simple pipeline without deploying anything" section.

We see that the hello world pipeline has exactly one job with exactly one task in its build plan.

Nested within the definition of the task you find the specification of the platform the task should run on. To execute tasks, a concourse.ci setup consists of one or multiple "workers". These are virtual machines on which concourse executes tasks. You could set up a concourse installation with workers that provide different platforms. Possible values are "linux", "windows" or "darwin", but a default concourse setup only provides Linux workers. You have to specify a platform in every task definition even if there are no workers other than Linux workers in your setup, because pipeline definitions are not coupled to a specific concourse setup.

Usually docker images are used to describe the inside of the container in which the task is executed.

Next comes the "image_resource" (annotated with "Docker Image" in the image above) within the task definition. The image_resource is easy to use; understanding what concourse does with that configuration in detail is a bit more complicated. It's important to know that tasks are not executed on the concourse workers directly. Instead they get executed within a
container.
Whenever a task should be executed, a new container is launched for this purpose. A container image describes the inside of a container (the file structure and the programs available to the processes started within that container). In the hello world pipeline we are using
the "ubuntu" docker image from the official docker hub registry.

The possibility to specify a Docker image for each task decouples the task from the worker, so the task is not limited to the tools installed on the worker VM.

By using images, it is furthermore possible that two different tasks use the same dependency in different versions.

Using Docker images, a pipeline developer can install all required dependencies themselves without asking an operator to install the dependencies on the concourse servers.

For example: You might have two java applications, one requires JRE 1.6 and the other requires JRE 1.7. To execute the unit tests for each of your applications you would create two concourse tasks. By using different Docker images for the tasks it's quite easy to run one task with JRE 1.6 and the other with 1.7 without installing both versions of the JRE on the worker VM itself. You just have to specify a docker image that includes the appropriate JRE version.

Last but not least there is a "run" block nested in the task block. The run block contains the path to a command which should be executed inside the specified container. In the hello world example, we are using a command that sits in a directory listed in the $PATH variable, so we don't need to specify the whole path. The command line arguments which should be passed to the command are specified separately in the "args" array within the run block. In the hello world example, we just pass one command line argument, which is "Hello World".

To execute multiple commands within one task you can use the following pattern:
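A sketch of that pattern: instead of calling a single command, run a shell and pass the whole script as one argument:

```yaml
run:
  path: sh
  args:
  - -exc        # -e: abort on error, -x: echo commands, -c: run the following string
  - |
    whoami
    date
    echo "Hello World"
```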

Not Quite Best Practices

I use this for small scripts or to quickly try something out. See section
Refactor Pipeline
to get an idea on how to manage large scripts properly.

In this example a yaml feature (called
literal block)
is used. Notice the "|" character that introduces such a literal block.

In YAML a literal block is a multi-line string. In this example the literal block contains a shell script consisting of the three commands 'whoami', 'date' and 'echo "Hello World"'.

Hello World with Resources

Let's extend the hello world pipeline with two resources to get a feeling how to use such concourse.ci resources. We'll extend the hello world pipeline in three steps:

Pro Tip

Always take baby steps toward the desired result and verify that each step works. This saves you a lot of troubleshooting time and searching for syntax errors in your yaml files.

We'll use the concourse git-resource to observe a git repository on GitHub. When there is a new commit, the pipeline should fetch it and just list the files within the git repository, so that we see the file structure of the git repository in the concourse web interface.

Once we've finished step one, we change the pipeline to deploy the code from the git repository to a
Cloud Foundry
installation (instead of listing the content of the repository).

By default, a concourse pipeline is configured so that changes to a resource (the git repo in this example) don't trigger an execution of the pipeline. We'll change this behaviour in the last step of this section and check how this changes the visual representation of the pipeline in the concourse.ci web interface.

Step 1 - Fetch Git Resource

To complete the first step, create an empty yml file (e.g. simple_deploy.yml) within your ci directory and insert the following content:
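A sketch of what simple_deploy.yml can contain. The repository URL and the job/task names are placeholders; the resource identifier app_sources is the one discussed below:

```yaml
resources:
- name: app_sources
  type: git
  source:
    uri: https://github.com/<your-account>/<your-app>.git  # placeholder: your fork

jobs:
- name: list-files            # job name is illustrative
  plan:
  - get: app_sources
  - task: list-files
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      inputs:
      - name: app_sources     # makes the cloned repo available as ./app_sources
      run:
        path: ls
        args: ["-l", "./app_sources"]
```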

A simple pipeline using a git resource.

In the resources block at the top, the "app_sources" resource is defined. This identifier is then used across the yml specification to refer to the resource.

Changes to the hello world pipeline are bolded. The first block is a resources block, where the resources used in the pipeline are declared and configured. The only configuration we provide for the resource is the uri and, of course, the type of the resource. Further, we choose an identifier for the resource so that we can refer to it in jobs and tasks. In this example the resource identifier is "app_sources".

Have a look at the
official concourse.ci git resource
for more configuration options. For example, it is also possible to specify a username and password or a certificate if you want to use private repositories.

In the build plan of the job we then say "get: app_sources". This step checks whether the latest version of the resource is available in the concourse.ci cache and fetches it if not, so that the resource is available for subsequent steps.

Not every step necessarily requires access to every resource fetched in a job. This is why you must additionally specify that a task requires a specific resource as input. Concourse will then provide a directory inside the task container which contains the resource. This directory is named after the resource identifier ("app_sources" in this example).

When you run an ls command in the task, you'll see that there is an app_sources directory. When you run "ls ./app_sources" you'll see the content of the resource.

To upload the pipeline definition to the concourse.ci server use the following command:
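Assuming the pipeline name simple-deploy and the file created above, the upload command is:

```shell
fly -t local set-pipeline -p simple-deploy -c ci/simple_deploy.yml
```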

Open the concourse web interface and go to
/teams/main/pipelines/simple-deploy.
You can also navigate to a pipeline by clicking on the
symbol in the upper left corner. The pipeline looks like this:

We did not yet configure the resource to trigger builds of the dependent jobs when there are new versions of the resource (the broken line between resource and job indicates this). This means we must start the build manually. We already did that in the section "First simple pipeline without deploying anything". Have a look at the animated gif there to unpause the pipeline and start a build of the job, or use the following CLI commands instead:

Unpause and start the pipeline using the fly CLI instead of the web interface
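Assuming the pipeline is named simple-deploy and its job is named list-files (substitute your own job name), the equivalent fly commands are:

```shell
fly -t local unpause-pipeline -p simple-deploy
fly -t local trigger-job -j simple-deploy/list-files
```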

Once the build has completed, you should see the following output in the job overview.

Step 2 - Deploy the app to Cloud Foundry

In the first step we used the concourse.ci git resource as an "input resource"; in this step we are going to use an "output resource". This time it won't be a git resource; instead we'll use the concourse.ci Cloud Foundry resource to push the application to a Cloud Foundry instance.

In this step we'll modify the pipeline definition from step 1 like this:

Bad Practice!!!

Don't write passwords and other secrets directly into the pipeline yml. We'll refactor this later!

In this example we added another resource to the resources block. The new resource type is "cf" instead of "git". The concourse cf resource makes it very easy to deploy applications to Cloud Foundry. To do so, it is required to specify the credentials of a Cloud Foundry instance in the resource config.

To finally deploy the application we added the put step to the "simple-deploy" job.
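A sketch of the modified parts: the new cf resource and the put step at the end of the job's build plan. The resource name, API endpoint, credentials, organization, space, and manifest path are placeholders (and, as the warning above says, real secrets don't belong in the yml):

```yaml
resources:
- name: deploy-to-cf          # resource name is illustrative
  type: cf
  source:
    api: https://api.example.com   # placeholder Cloud Foundry API endpoint
    username: my-user              # placeholder; don't commit real secrets
    password: my-password
    organization: my-org
    space: development

jobs:
- name: simple-deploy
  plan:
  - get: app_sources
  - put: deploy-to-cf
    params:
      manifest: app_sources/manifest.yml  # assumes a manifest in the repo
```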

Why do we not need to specify an input for the put step like we had to do for the task that lists the directory content? The
concourse documentation
says:

All artefacts collected during the plan's execution will be available in the working directory.

This means that for put steps we don't need to explicitly tell concourse which inputs we need; we just get everything.

When you update the pipeline via the fly CLI the pipeline should look like this (the image shows the pipeline after a build succeeded):

Step 3: Trigger Builds Automatically

So far we have to trigger the builds manually even if there are new commits in the git repo. To configure the pipeline so that concourse triggers new builds automatically as soon as it detects new commits, just add the following line to the pipeline definition (the bold line):
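The change is a single extra line on the get step of the job:

```yaml
- get: app_sources
  trigger: true   # new builds start automatically when new commits are detected
```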

There are still a few things missing here which would prevent me from using this pipeline:

Unit tests are not executed

Credentials are in the pipeline definition

The application encounters downtime when concourse performs a deployment

Deploy in test environment first and then to production

Refactoring pipelines regarding best practices

These things will be fixed in the next sections.

Pipe the first service to production (Java App)

The UAA is our open source guinea pig for this section. It's a useful general purpose component written in Java.

In this section we are going to specify a pipeline which tests, builds and deploys a Java application/service. The application we are going to use is the UAA.

UAA stands for User Account and Authentication and is a service that was developed as a component of Cloud Foundry. We are not going to deploy Cloud Foundry in this tutorial. The good thing about the UAA service is that this microservice is a general purpose service without any Cloud Foundry specific logic. So we are going to
reuse this service in our own microservice architecture.
The UAA implements two popular protocols:
OAuth 2.0
and
SCIM.

At least the OAuth 2.0 protocol is very common and often used in microservices to realize human-to-machine and machine-to-machine authentication/authorization.

Create a Docker image to provide all dependencies required to run the UAA unit tests.

Specify a concourse job to run the UAA unit tests and build a war file.

Store the war file to Artifactory.

Create a PostgreSQL instance in the test and production environment.

Specify a concourse job to deploy the UAA to the test environment.

Send an email once a new version of the UAA is deployed to the test environment.

Specify a concourse job to deploy the UAA to the production environment.

Step 1 - Setup Git Repo for the pipeline and create files

The official best practice is to store the pipeline code in the same source code repository as the actual application code. In our example this means we would have to store it in the UAA GitHub repository. But since we are not the owners of this repository, we won't do this. Instead we create a dedicated Git repository to store the UAA pipeline.

Step 2 - Create and Publish a Docker Image

You could simply use the link to the public Docker image I prepared and uploaded to my Docker Hub account, but since this is a recurring step in building a pipeline I will not keep these details from you.

With a free Docker Hub account you can't upload private Docker images. This is ok for the tutorial.

We'll use a Dockerfile to describe the Docker image. Once we've described the Docker image in a Dockerfile, everyone can create the image from this Dockerfile. Another benefit of Dockerfiles is that we can put them into a source control system (git in our example).

To be fair, this is a very simple Dockerfile, and we could have used the "java:8-jdk" image directly since we don't add any further dependencies to the image. If you have other dependencies required to test and build the application, you would specify them in this Dockerfile.

For now, we just specify that our Docker image inherits from the "java:8-jdk" image.
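The whole Dockerfile is therefore a single instruction:

```dockerfile
FROM java:8-jdk
```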

Let's build an image out of the Dockerfile by running the following command:

In the following command you must replace "wolfoliver" with your Docker Hub account.
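The build and push commands then look like this (the image name java8 is my choice; replace wolfoliver with your account):

```shell
docker build -t wolfoliver/java8 .   # run in the directory containing the Dockerfile
docker push wolfoliver/java8         # requires `docker login` first
```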

If you don't specify a tag_filter, every commit will be deployed, even commits that don't have a tag at all.

You can also specify '3.*.*' to deploy every version that begins with a "3".

In the resources block we specify the UAA repository as the source where the code should be fetched from. Another option we specify for the git resource is tag_filter: '3.6.*'. This tells concourse that only
commits tagged with a tag
matching the specified
glob
will trigger the pipeline. The pipeline will only deploy UAA versions which have a git tag like '3.6.0', '3.6.1', '3.6.2', ..., '3.6.12', and so on. But versions like '3.6.4.rc1' will also be deployed.
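As a sketch, the resource definition with the tag filter can look like this (the resource name and the repository URI are placeholders):

```yaml
resources:
- name: uaa_sources           # identifier is illustrative
  type: git
  source:
    uri: https://github.com/<your-account>/uaa.git  # placeholder: your UAA fork
    tag_filter: '3.6.*'       # only tags matching this glob trigger the pipeline
```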

The next configuration I want to talk about is the outputs config:

outputs:
- name: uaa_war

That's how we move content from one step to another. Without doing so, every file created in a step will be deleted when the task is finished, unless the task moves the file to the specified output directory. In this pipeline we do this in the last line:

mv uaa/build/libs/cloudfoundry-identity-uaa-*.war ../uaa_war

The other commands in the run script run the unit tests and build the war file (cloudfoundry-identity-uaa-*.war).

Of course you should not skip the unit tests for real-world projects! This is just to be able to proceed with the tutorial without spending too much time on troubleshooting.

Step 4a: Setup local Artifactory server

If you don't have an account on a production Artifactory server, you can set one up on your own machine in order to follow this tutorial. This section briefly explains how to do that.

In this step we'll upload the created war file to
Artifactory.
Artifactory is a widely used open source system to store software packages. We'll store the created war file in Artifactory, download it in later jobs and deploy it to the test and production environments. This way we don't have to build the war file again for each environment; instead we just download it from Artifactory.

If you are running your concourse setup locally with Vagrant, you can also set up Artifactory locally. There is a Docker image to get it running quickly. Start your Docker engine and execute:
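A command like the following starts the open source edition of Artifactory in a container (the registry path and tag are assumptions and may have changed; check JFrog's documentation for the current image name):

```shell
# Start Artifactory OSS in the background and expose its
# web interface and API on port 8081.
docker run -d --name artifactory -p 8081:8081 \
  docker.bintray.io/jfrog/artifactory-oss:latest
```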

To check whether Artifactory is running, find out the IP address of the VM your Docker engine runs on. It's probably not localhost, because on macOS and Windows the Docker engine runs in a VM (the VM runs on your local machine).

To find out the Docker engine IP address run:

Find out the IP address where your Artifactory server runs.

concourse-ci-tutorial$
docker-machine ip default
192.168.99.100

If the docker machine IP is 192.168.99.100 you can check whether Artifactory is running by opening
http://192.168.99.100:8081
in your browser.

Recap: We already know that custom concourse resources are realized as Docker images.

At the top we see the whole new section resource_types. We need to specify this because Artifactory can't be used as a built-in resource in concourse ci. The artifactory-resource is an external resource, so we have to define where the logic of this resource type is located. By logic is meant the code that describes how to upload artefacts to Artifactory, how to check whether new versions of an artefact are available and how to download specific artefact versions from Artifactory. Concourse requires this logic to be encapsulated in a Docker image. That's why we specify a Docker image here (the Docker image is pivotalservices/artifactory-resource).
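The resource_types section could be sketched like this (the resource type name "artifactory" is an assumption; the image is the one named above):

```yaml
resource_types:
- name: artifactory
  type: docker-image
  source:
    # Docker image that encapsulates the logic of the Artifactory resource:
    # how to check for, download and upload artefact versions.
    repository: pivotalservices/artifactory-resource
```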

At the end of the pipeline definition we add a "put" step to the build plan of the single job we have. There we specify which file should be uploaded to the Artifactory repository.
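The put step in the build plan might look roughly like this (the resource name and the file pattern are assumptions; the war file comes from the uaa_war output directory of the build task):

```yaml
- put: artifactory-repository
  params:
    # Upload the war file that the previous task moved
    # into the uaa_war output directory.
    file: uaa_war/cloudfoundry-identity-uaa-*.war
```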

When you update the pipeline using the fly set-pipeline command, the pipeline should look like this:

Step 5: Create RDBMS instances

To run the UAA you need a relational database (PostgreSQL, MySQL) where the UAA can store its data.

For cheap testing purposes

You don't necessarily need to create two database servers to follow this tutorial.

If you don't have the possibility to create two database instances, you can also run the UAA without a persistent database service and instead use
HSQLDB.
HSQLDB is a lightweight database that can be embedded into other Java applications. With such a setup the UAA writes its data to the file system of the machine where the UAA server is running.
But in this case the setup is not very usable in real life, for the following reasons:

You can't scale the UAA to more than one instance. This is because (when using Cloud Foundry, or a PaaS in general) each instance gets its own file system and the file systems are not synced.

All data is lost when the UAA instance gets restarted. This is because the file system of application instances is ephemeral when using a PaaS. Read about
12-factor applications
(especially the section about
stateless apps
)
to understand why this is handled this way.

Anyway, if you decide to go with HSQLDB you still have to ensure that the right Cloud Foundry application spaces are available. This is described next.

In order to create a database instance, you have to log in to your Cloud Foundry provider of choice. To do so you can use the cf CLI:

In this example we are using anynines as Cloud Foundry provider. Replace https://api.aws.ie.a9s.eu if you want to use another one.
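The login and marketplace commands would look like this (the API endpoint is the anynines one mentioned above; the CLI will prompt for your credentials):

```shell
# Log in to the anynines Cloud Foundry installation.
cf login -a https://api.aws.ie.a9s.eu

# List the services and plans offered by the provider's marketplace.
cf marketplace
```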

The output above only shows the relevant section of the anynines marketplace. We need to create a PostgreSQL/MySQL instance in both environments (Cloud Foundry spaces). To switch the Cloud Foundry space and create a PostgreSQL instance run:

Switch to the test and production space and create a PostgreSQL in each space.
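Assuming your spaces are called "test" and "production" (the service name, plan name and instance name below are assumptions; take the real ones from the cf marketplace output):

```shell
# Switch to the test space and create a PostgreSQL instance there.
cf target -s test
cf create-service a9s-postgresql postgresql-single-small uaa-db

# Do the same for the production space.
cf target -s production
cf create-service a9s-postgresql postgresql-single-small uaa-db
```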

Like the Artifactory resource, the
concourse.ci email resource
is not a built-in resource, so we have to specify a new resource type. And as with the resource type for the Artifactory resource, we have to specify where the Docker image is located which contains the logic to send emails.

In the resources section the SMTP settings must be provided. In the pipeline example I'm using my Gmail account to send emails.
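The resource type and a resource using Gmail's SMTP server could be sketched like this (the image name, parameter names and credentials are assumptions/placeholders; check the email resource's documentation for the exact schema):

```yaml
resource_types:
- name: email
  type: docker-image
  source:
    repository: pcfseceng/email-resource

resources:
- name: send-email
  type: email
  source:
    smtp:
      host: smtp.gmail.com
      port: "587"
      username: your-account@gmail.com
      password: your-app-password
    from: your-account@gmail.com
    to: [ "team@example.com" ]
```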

The on_success, on_failure and do keywords are new statements which we did not use before.

Because we want to send out different emails depending on whether the step building the UAA *.war file fails or succeeds, we have to specify two different branches in the build plan. Using the on_success statement in a task we can execute another step only when the task succeeds (the same goes for on_failure).

In our case we want to execute two steps when the build task succeeds/fails. The first step is a task to create the email content, the second step is a put to send out the email. To be able to specify two steps in the on_success/on_failure hook, we must use the
do
statement.
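Schematically, the branching in the build plan looks like this (all task names, file paths and put parameters are assumptions for illustration):

```yaml
- task: build-uaa
  file: ci/tasks/build-uaa.yml
  on_success:
    # 'do' lets us run two steps in a row inside the hook.
    do:
    - task: create-success-email
      file: ci/tasks/create-success-email.yml
    - put: send-email
      params: { subject: email/subject.txt, body: email/body.txt }
  on_failure:
    do:
    - task: create-failure-email
      file: ci/tasks/create-failure-email.yml
    - put: send-email
      params: { subject: email/subject.txt, body: email/body.txt }
```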

Step 8: Specify a concourse job to deploy the UAA to the production environment

In this final step of this section we want to deploy the UAA application to the production environment, but this time not automatically like we do in the test environment. To do this, extend the pipeline with the following code:

Code that needs to be added so that the pipeline deploys to production.

Because the pipeline definition is getting longer now, you only see the sections that need to be added, not the complete pipeline.
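A sketch of such a job (the job names, resource names and manifest path are assumptions) shows the two important points: it only takes artefact versions that already passed the test deployment, and it has no trigger: true, so it runs only when started manually:

```yaml
jobs:
- name: deploy-to-production
  plan:
  - get: uaa-build
    # 'passed' ensures only versions that went through the
    # test deployment job can reach production.
    passed: [ deploy-to-test ]
    # Note: no 'trigger: true' here, so the job must be
    # started manually via the concourse web interface.
  - put: cf-production
    params:
      manifest: manifests/manifest-production.yml
```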

When you extend the pipeline with the job above it should look like this:

Note that there is a dashed line between the "uaa-build" resource and the "deploy-to-production" job. This is because you have to open the job and start a build manually if you want to deploy a new UAA version to production.

Not Finished Yet

The last two chapters are not written yet, and because writing this tutorial has been quite time-consuming so far, I would like to get some feedback first before I continue.

If you think the last two chapters are worth writing, just drop me a message on
twitter.