NOTE: Note:
If you have a mirrored repository where GitLab pulls from,
you may need to enable pipeline triggering in your project's
Settings > Repository > Pull from a remote repository > Trigger pipelines for mirror updates.

Jobs

The YAML file defines a set of jobs with constraints stating when they should
be run. You can specify an unlimited number of jobs, which are defined as
top-level elements with an arbitrary name and which must always contain at
least the script clause.

The above example is the simplest possible CI/CD configuration with two separate
jobs, where each of the jobs executes a different command.
Of course a command can execute code directly (./configure;make;make install)
or run a script (test.sh) in the repository.

Jobs are picked up by Runners and executed within the
environment of the Runner. Importantly, each job runs independently
of the others.

Each job must have a unique name, but there are a few reserved keywords that
cannot be used as job names:

image

services

stages

types

before_script

after_script

variables

cache

A job is defined by a list of parameters that define the job behavior.

.tests in this example is a hidden key, but it's
possible to inherit from regular jobs as well.

extends supports multi-level inheritance; however, it is not recommended to
use more than three levels. The maximum supported nesting level is 10.
The following example has two levels of inheritance:
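A sketch of such a two-level configuration (the .tests, .rspec, and rspec 1 names are illustrative):

```yaml
.tests:
  only:
    refs:
      - branches

.rspec:
  extends: .tests
  script: rake rspec

rspec 1:
  extends: .rspec
  variables:
    RSPEC_SUITE: '1'
```

Here rspec 1 inherits from .rspec, which in turn inherits from .tests, so the final job combines the keys of all three.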

image and services

This allows you to specify a custom Docker image and a list of services that
can be used for the duration of the job. The configuration of this feature is
covered in a separate document.

before_script and after_script

Introduced in GitLab 8.7 and requires GitLab Runner v1.2

before_script is used to define the command that should be run before all
jobs, including deploy jobs, but after the restoration of artifacts.
This can be an array or a multi-line string.

after_script is used to define the command that will be run after all
jobs, including failed ones. This has to be an array or a multi-line string.

The before_script and the main script are concatenated and run in a single context/container.
The after_script is run separately, so depending on the executor, changes done
outside of the working tree might not be visible, e.g. software installed in the
before_script.

It's possible to overwrite the globally defined before_script and after_script
if you set it per-job:

```yaml
before_script:
  - global before script

job:
  before_script:
    - execute this instead of global before script
  script:
    - my command
  after_script:
    - execute this after my script
```

stages

stages is used to define stages that can be used by jobs and is defined
globally.

The specification of stages allows for having flexible multi stage pipelines.
The ordering of elements in stages defines the ordering of jobs' execution:

Jobs of the same stage are run in parallel.

Jobs of the next stage are run after the jobs from the previous stage
complete successfully.

Let's consider the following example, which defines 3 stages:

```yaml
stages:
  - build
  - test
  - deploy
```

First, all jobs of build are executed in parallel.

If all jobs of build succeed, the test jobs are executed in parallel.

If all jobs of test succeed, the deploy jobs are executed in parallel.

If all jobs of deploy succeed, the commit is marked as passed.

If any of the previous jobs fails, the commit is marked as failed and no
jobs of further stage are executed.

There are also two edge cases worth mentioning:

If no stages are defined in .gitlab-ci.yml, then the build,
test and deploy are allowed to be used as job's stage by default.

If a job doesn't specify a stage, the job is assigned the test stage.

stage

stage is defined per-job and relies on stages, which is defined
globally. It allows you to group jobs into different stages, and jobs of the
same stage are executed in parallel. For example:
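A minimal sketch (stage names, job names, and commands are illustrative):

```yaml
stages:
  - build
  - test
  - deploy

job 1:
  stage: build
  script: make build dependencies

job 2:
  stage: build
  script: make build artifacts

job 3:
  stage: test
  script: make test
```

job 1 and job 2 run in parallel in the build stage; job 3 only starts once both have succeeded.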

types

CAUTION: Deprecated: types is deprecated, and could be removed in one of the future releases.
Use stages instead.

script

script is the only required keyword that a job needs. It's a shell script
which is executed by the Runner. For example:

```yaml
job:
  script: "bundle exec rspec"
```

This parameter can also contain several commands using an array:

```yaml
job:
  script:
    - uname -a
    - bundle exec rspec
```

Sometimes, script commands will need to be wrapped in single or double quotes.
For example, commands that contain a colon (:) need to be wrapped in quotes so
that the YAML parser knows to interpret the whole thing as a string rather than
a "key: value" pair. Be careful when using special characters:
:, {, }, [, ], ,, &, *, #, ?, |, -, <, >, =, !, %, @, `.

only and except (simplified)

only and except are two parameters that set a job policy to limit when
jobs are created:

only defines the names of branches and tags for which the job will run.

except defines the names of branches and tags for which the job will
not run.

There are a few rules that apply to the usage of job policy:

only and except are inclusive. If both only and except are defined
in a job specification, the ref is filtered by only and except.

only and except allow the use of regular expressions.

only and except allow you to specify a repository path to filter jobs for
forks.

In addition, only and except allow the use of special keywords:

| Value | Description |
|----------|-------------|
| branches | When a branch is pushed. |
| tags | When a tag is pushed. |
| api | When pipeline has been triggered by a second pipelines API (not triggers API). |

refs and kubernetes

only:variables

The variables keyword is used to define variable expressions. In other words,
you can use predefined variables / project / group or
environment-scoped variables to define an expression that GitLab is going to
evaluate in order to decide whether a job should be created or not.

See the example below. The job is going to be created only when the pipeline
has been scheduled or runs for the master branch, and only if the kubernetes
service is active in the project.
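A sketch matching that description (the job name is illustrative):

```yaml
job:
  only:
    refs:
      - master
      - schedules
    kubernetes: active
```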

In the scenario above, if you are pushing multiple commits to an existing
branch in GitLab, GitLab creates and triggers the docker build job, provided
that one of the commits contains changes to any of the following:

The Dockerfile file.

Any of the files inside docker/scripts/ directory.

Any of the files and subfolders inside dockerfiles directory.

CAUTION: Warning:
There are some caveats when using this feature with new branches and tags. See
the section below.

Using changes with new branches and tags

If you are pushing a new branch or a new tag to GitLab, the policy
always evaluates to true and GitLab will create a job. This feature is not
connected with merge requests yet; because GitLab creates pipelines
before a user can create a merge request, the target branch is unknown at
this point.

Without a target branch, it is not possible to know what the common ancestor is,
thus we always create a job in that case. This feature works best for stable
branches like master because in that case GitLab uses the previous commit
that is present in a branch to compare against the latest SHA that was pushed.

tags

tags is used to select specific Runners from the list of all Runners that are
allowed to run this project.

During the registration of a Runner, you can specify the Runner's tags, for
example ruby, postgres, development.

tags allow you to run jobs with Runners that have the specified tags
assigned to them:

```yaml
job:
  tags:
    - ruby
    - postgres
```

The specification above will make sure that job is built by a Runner that
has both the ruby AND postgres tags defined.

Tags are also a great way to run different jobs on different platforms, for
example, given an OS X Runner with tag osx and Windows Runner with tag
windows, the following jobs run on respective platforms:
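For instance (job names and commands are illustrative):

```yaml
windows job:
  stage: build
  tags:
    - windows
  script:
    - echo Hello, %USERNAME%!

osx job:
  stage: build
  tags:
    - osx
  script:
    - echo "Hello, $USER!"
```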

allow_failure

allow_failure is used when you want to allow a job to fail without impacting
the rest of the CI suite. Failed jobs don't contribute to the commit status.
The default value is false.

When enabled and the job fails, the pipeline will be successful/green for all
intents and purposes, but a "CI build passed with warnings" message will be
displayed on the merge request or commit or job page. This is to be used by
jobs that are allowed to fail, but where failure indicates some other (manual)
steps should be taken elsewhere.

In the example below, job1 and job2 will run in parallel, but if job1
fails, it will not stop the next stage from running, since it's marked with
allow_failure: true:
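A sketch of such a configuration (the scripts are placeholders):

```yaml
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true

job2:
  stage: test
  script:
    - execute_script_that_will_succeed

job3:
  stage: deploy
  script:
    - deploy_to_staging
```

Even if job1 fails, job3 in the deploy stage still runs once the test stage finishes.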

Always execute cleanup_job as the last step in pipeline regardless of
success or failure.

Allow you to manually execute deploy_job from GitLab's UI.
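The two bullets above describe a pipeline that could be sketched as follows (the build and test jobs are illustrative additions):

```yaml
stages:
  - build
  - test
  - deploy
  - cleanup

build_job:
  stage: build
  script:
    - make build

test_job:
  stage: test
  script:
    - make test

deploy_job:
  stage: deploy
  script:
    - make deploy
  when: manual

cleanup_job:
  stage: cleanup
  script:
    - cleanup after jobs
  when: always
```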

when:manual

Notes:

Introduced in GitLab 8.10.

Blocking manual actions were introduced in GitLab 9.0.

Protected actions were introduced in GitLab 9.2.

Manual actions are a special type of job that are not executed automatically,
they need to be explicitly started by a user. An example usage of manual actions
would be a deployment to a production environment. Manual actions can be started
from the pipeline, job, environment, and deployment views. Read more at the
environments documentation.

Manual actions can be either optional or blocking. Blocking manual actions will
block the execution of the pipeline at the stage this action is defined in. It's
possible to resume execution of the pipeline when someone executes a blocking
manual action by clicking a play button.

When a pipeline is blocked, it will not be merged if Merge When Pipeline Succeeds
is set. Blocked pipelines also have a special status, called manual.
Manual actions are non-blocking by default. If you want to make a manual action
blocking, add allow_failure: false to the job's definition
in .gitlab-ci.yml.

Optional manual actions have allow_failure: true set by default and their
statuses do not contribute to the overall pipeline status. So, even if a manual
action fails, the pipeline will eventually succeed.

Manual actions are considered to be write actions, so permissions for
protected branches are used when a
user wants to trigger an action. In other words, in order to trigger a manual
action assigned to a branch that the pipeline is running for, the user needs to
have the ability to merge into this branch.

when:delayed

Delayed jobs are for executing scripts after a certain period of time.
This is useful if you want to avoid jobs entering the pending state immediately.

You can set the period with the start_in key. The value of start_in is an
elapsed time in seconds, unless a unit is provided, and must be less than or
equal to one hour. Examples of valid values include:

10 seconds

30 minutes

1 hour

When there is a delayed job in a stage, the pipeline will not progress until the delayed job has finished.
This means this keyword can also be used for inserting delays between different stages.

The timer of a delayed job starts immediately after the previous stage has completed.
Similar to other types of jobs, a delayed job's timer will not start unless the previous stage passed.

The following example creates a job named timed rollout 10% that is executed 30 minutes after the previous stage has completed:
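A sketch of that job:

```yaml
timed rollout 10%:
  stage: deploy
  script: echo 'Rolling out 10% ...'
  when: delayed
  start_in: 30 minutes
```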

In the above example we set up the review_app job to deploy to the review
environment, and we also defined a new stop_review_app job under on_stop.
Once the review_app job is successfully finished, it will trigger the
stop_review_app job based on what is defined under when. In this case we
set it up to manual so it will need a manual action via
GitLab's web interface in order to run.

The stop_review_app job is required to have the following keywords defined:

The deploy as review app job will be marked as deployment to dynamically
create the review/$CI_COMMIT_REF_NAME environment, where $CI_COMMIT_REF_NAME
is an environment variable set by the Runner. The
$CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable
for inclusion in URLs. In this case, if the deploy as review app job was run
in a branch named pow, this environment would be accessible with an URL like
https://review-pow.example.com/.

This of course implies that the underlying server which hosts the application
is properly configured.


cache:key

Introduced in GitLab Runner v1.0.0.

Since the cache is shared between jobs, if you're using different
paths for different jobs, you should also set a different cache:key
otherwise cache content can be overwritten.

The key directive allows you to define the affinity of caching between jobs,
allowing you to have a single cache for all jobs, a cache per-job, a cache
per-branch, or any other way that fits your workflow. This way, you can
fine-tune caching, including caching data between different jobs or even
different branches.

The cache:key variable can use any of the
predefined variables. If not set, the default key is the
literal default, which means everything is shared between pipelines
and jobs by default, starting from GitLab 9.0.

NOTE: Note:
The cache:key variable cannot contain the / character, or the equivalent
URI-encoded %2F; a value made only of dots (., %2E) is also forbidden.

For example, to enable per-branch caching:

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - binaries/
```

If you use Windows Batch to run your shell scripts you need to replace
$ with %:

```yaml
cache:
  key: "%CI_COMMIT_REF_SLUG%"
  paths:
    - binaries/
```

cache:untracked

Set untracked: true to cache all files that are untracked in your Git
repository:

```yaml
rspec:
  script: test
  cache:
    untracked: true
```

Cache all Git untracked files and files in binaries:

```yaml
rspec:
  script: test
  cache:
    untracked: true
    paths:
      - binaries/
```

cache:policy

Introduced in GitLab 9.4.

The default behaviour of a caching job is to download the files at the start of
execution, and to re-upload them at the end. This allows any changes made by the
job to be persisted for future runs, and is known as the pull-push cache
policy.

If you know the job doesn't alter the cached files, you can skip the upload step
by setting policy: pull in the job specification. Typically, this would be
twinned with an ordinary cache job at an earlier stage to ensure the cache
is updated from time to time:
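A sketch of this pattern, assuming a Ruby project that caches vendor/bundle under an illustrative gems cache key:

```yaml
stages:
  - setup
  - test

prepare:
  stage: setup
  script:
    - bundle install --deployment
  cache:
    key: gems
    paths:
      - vendor/bundle

rspec:
  stage: test
  script:
    - bundle exec rspec
  cache:
    key: gems
    paths:
      - vendor/bundle
    policy: pull
```

The prepare job updates the cache with the default pull-push policy, while rspec only downloads it, skipping the upload step.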

artifacts:name

Introduced in GitLab 8.6 and GitLab Runner v1.1.0.

The name directive allows you to define the name of the created artifacts
archive. That way, you can have a unique name for every archive which could be
useful when you'd like to download the archive from GitLab. The artifacts:name
variable can make use of any of the predefined variables.
The default name is artifacts, which becomes artifacts.zip when downloaded.

NOTE: Note:
If your branch-name contains forward slashes
(e.g. feature/my-feature) it is advised to use $CI_COMMIT_REF_SLUG
instead of $CI_COMMIT_REF_NAME for proper naming of the artifact.

To create an archive with a name of the current job:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current branch or tag including only
the binaries directory:

```yaml
job:
  artifacts:
    name: "$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current job and the current branch or
tag including only the binaries directory:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```

To create an archive with a name of the current stage and branch name:
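One way to express this, combining the $CI_JOB_STAGE and $CI_COMMIT_REF_NAME predefined variables:

```yaml
job:
  artifacts:
    name: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
    paths:
      - binaries/
```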

artifacts:untracked

artifacts:untracked is used to add all Git untracked files as artifacts (along
with the paths defined in artifacts:paths).

NOTE: Note:
To exclude the folders/files which should not be a part of untracked just
add them to .gitignore.

Send all Git untracked files:

```yaml
artifacts:
  untracked: true
```

Send all Git untracked files and files in binaries:

```yaml
artifacts:
  untracked: true
  paths:
    - binaries/
```

artifacts:when

Introduced in GitLab 8.9 and GitLab Runner v1.3.0.

artifacts:when is used to upload artifacts on job failure or despite the
failure.

artifacts:when can be set to one of the following values:

on_success - upload artifacts only when the job succeeds. This is the default.

on_failure - upload artifacts only when the job fails.

always - upload artifacts regardless of the job status.

To upload artifacts only when job fails:

```yaml
job:
  artifacts:
    when: on_failure
```

artifacts:expire_in

Introduced in GitLab 8.9 and GitLab Runner v1.3.0.

expire_in allows you to specify how long artifacts should live before they
expire and are therefore deleted, counting from the time they are uploaded and
stored on GitLab. If the expiry time is not defined, it defaults to the
instance wide setting
(30 days by default, forever on GitLab.com).

You can use the Keep button on the job page to override expiration and
keep artifacts forever.

After their expiry, artifacts are deleted hourly by default (via a cron job),
and are not accessible anymore.

The value of expire_in is an elapsed time in seconds, unless a unit is
provided. Examples of parsable values:
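For instance, values such as 42 (seconds), 3 mins 4 sec, 2 hrs 20 min, or 1 week parse. A sketch that expires artifacts one week after upload:

```yaml
job:
  artifacts:
    expire_in: 1 week
```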

NOTE: Note:
In case the JUnit tool you use exports to multiple XML files, you can specify
multiple test report paths within a single job and they will be automatically
concatenated into a single file. Use a filename pattern (junit: rspec-*.xml),
an array of filenames (junit: [rspec-1.xml, rspec-2.xml, rspec-3.xml]), or a
combination thereof (junit: [rspec.xml, test-results/TEST-*.xml]).

To use this feature, define dependencies in context of the job and pass
a list of all previous jobs from which the artifacts should be downloaded.
You can only define jobs from stages that are executed before the current one.
An error will be shown if you define jobs from the current stage or next ones.
Defining an empty array will skip downloading any artifacts for that job.
The status of the previous job is not considered when using dependencies, so
if it failed or it is a manual job that was not run, no error occurs.

In the following example, we define two jobs with artifacts, build:osx and
build:linux. When the test:osx is executed, the artifacts from build:osx
will be downloaded and extracted in the context of the build. The same happens
for test:linux and artifacts from build:linux.

The job deploy will download artifacts from all previous jobs because of
the stage precedence:
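The example described above can be sketched as:

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux

deploy:
  stage: deploy
  script: make deploy
```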

coverage

coverage allows you to configure how code coverage will be extracted from the
job output.

Regular expressions are the only valid kind of value expected here. So, using
surrounding / is mandatory in order to consistently and explicitly represent
a regular expression string. You must escape special characters if you want to
match them literally.

retry

retry allows you to configure how many times a job is going to be retried in
case of a failure.

When a job fails and has retry configured, it is going to be processed again
up to the amount of times specified by the retry keyword.

If retry is set to 2 and a job succeeds on the second run (first retry), it
won't be retried again. The retry value has to be an integer between 0 and 2
inclusive (two retries maximum, three runs in total).

A simple example to retry in all failure cases:

```yaml
test:
  script: rspec
  retry: 2
```

By default, a job will be retried on all failure cases. To have better control
over which failures to retry, retry can be a hash with the following keys:

max: The maximum number of retries.

when: The failure cases to retry.

To retry only runner system failures at maximum two times:

```yaml
test:
  script: rspec
  retry:
    max: 2
    when: runner_system_failure
```

If there is another failure, other than a runner system failure, the job will
not be retried.

To retry on multiple failure cases, when can also be an array of failures:
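For instance, to retry on runner system failures as well as stuck or timed-out jobs (stuck_or_timeout_failure is one of the supported failure cases):

```yaml
test:
  script: rspec
  retry:
    max: 2
    when:
      - runner_system_failure
      - stuck_or_timeout_failure
```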

local to the same repository, referenced by using full paths in the same
repository, with / being the root directory. For example:

```yaml
# Within the repository
include: '/templates/.gitlab-ci-template.yml'
```

NOTE: Note:
You can only use files that are currently tracked by Git on the same branch
your configuration file is. In other words, when using a local file, make
sure that both .gitlab-ci.yml and the local file are on the same branch.

NOTE: Note:
We don't support the inclusion of local files through Git submodules paths.

remote in a different location, accessed using HTTP/HTTPS, referenced
using the full URL. For example:
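A sketch with an illustrative URL:

```yaml
include: 'https://gitlab.com/awesome-project/raw/master/.gitlab-ci-template.yml'
```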

NOTE: Note:
The remote file must be publicly accessible through a simple GET request, as we don't support authentication schemas in the remote URL.

Since GitLab 10.8 we are now recursively merging the files defined in include
with those in .gitlab-ci.yml. Files defined by include are always
evaluated first and recursively merged with the content of .gitlab-ci.yml, no
matter the position of the include keyword. You can take advantage of
recursive merging to customize and override details in included CI
configurations with local definitions.

The following example shows specific YAML-defined variables and details of the
production job from an include file being customized in .gitlab-ci.yml.

In this case, the variables POSTGRES_USER and POSTGRES_PASSWORD along
with the environment url of the production job defined in
autodevops-template.yml have been overridden by new values defined in
.gitlab-ci.yml.

NOTE: Note:
Recursive includes are not supported meaning your external files
should not use the include keyword, as it will be ignored.

Recursive merging lets you extend and override dictionary mappings, but
you cannot add or modify items in an included array. For example, to add
an additional item to the production job script, you must repeat the
existing script items.

In this case, if install_dependencies and deploy were not repeated in
.gitlab-ci.yml, they would not be part of the script for the production
job in the combined CI configuration.

NOTE: Note:
We currently do not support using YAML aliases across different YAML files
sourced by include. You must only refer to aliases in the same file. Instead
of using YAML anchors you can use extends keyword.

variables

Introduced in GitLab Runner v0.5.0.

NOTE: Note:
Integers (as well as strings) are legal both for a variable's name and value.
Floats are not legal and cannot be used.

GitLab CI/CD allows you to define variables inside .gitlab-ci.yml that are
then passed in the job environment. They can be set globally and per-job.
When the variables keyword is used on a job level, it overrides the global
YAML variables and predefined ones.

They are stored in the Git repository and are meant to store non-sensitive
project configuration, for example:

```yaml
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
```

These variables can later be used in all executed commands and scripts.
The YAML-defined variables are also set in all created service containers,
allowing you to fine-tune them.

To turn off globally defined variables in a specific job, define an empty hash:

```yaml
job_name:
  variables: {}
```

In addition to user-defined variables, there are also the ones set up by the
Runner itself.
One example is CI_COMMIT_REF_NAME, which has the value of
the branch or tag name for which the project is built. Apart from the variables
you can set in .gitlab-ci.yml, there are also the so-called
Variables
which can be set in GitLab's UI.

Git strategy

Introduced in GitLab 8.9 as an experimental feature. May change or be removed
completely in future releases. GIT_STRATEGY=none requires GitLab Runner
v1.7+.

You can set the GIT_STRATEGY used for getting recent application code, either
globally or per-job in the variables section. If left
unspecified, the default from project settings will be used.

There are three possible values: clone, fetch, and none.

clone is the slowest option. It clones the repository from scratch for every
job, ensuring that the project workspace is always pristine.

```yaml
variables:
  GIT_STRATEGY: clone
```

fetch is faster as it re-uses the project workspace (falling back to clone
if it doesn't exist). git clean is used to undo any changes made by the last
job, and git fetch is used to retrieve commits made since the last job ran.

```yaml
variables:
  GIT_STRATEGY: fetch
```

none also re-uses the project workspace, but skips all Git operations
(including GitLab Runner's pre-clone script, if present). It is mostly useful
for jobs that operate exclusively on artifacts (e.g., deploy). Git repository
data may be present, but it is certain to be out of date, so you should only
rely on files brought into the project workspace from cache or artifacts.

```yaml
variables:
  GIT_STRATEGY: none
```

Git submodule strategy

Requires GitLab Runner v1.10+.

The GIT_SUBMODULE_STRATEGY variable is used to control whether and how Git
submodules are included when fetching the code before a build. You can set it
globally or per-job in the variables section.

There are three possible values: none, normal, and recursive:

none means that submodules will not be included when fetching the project
code. This is the default, which matches the pre-v1.10 behavior.

normal means that only the top-level submodules will be included. It is
equivalent to:

```sh
git submodule sync
git submodule update --init
```

recursive means that all submodules (including submodules of submodules)
will be included. This feature needs Git v1.8.1 and later. When using a
GitLab Runner with an executor not based on Docker, make sure the Git version
meets that requirement. It is equivalent to:

```sh
git submodule sync --recursive
git submodule update --init --recursive
```

Note that for this feature to work correctly, the submodules must be configured
(in .gitmodules) with either:

the HTTP(S) URL of a publicly-accessible repository, or

a relative path to another repository on the same GitLab server. See the
Git submodules documentation.

Git checkout

Introduced in GitLab Runner 9.3

The GIT_CHECKOUT variable can be used when the GIT_STRATEGY is set to either
clone or fetch to specify whether a git checkout should be run. If not
specified, it defaults to true. You can set it globally or per-job in the
variables section.

If set to false, the Runner will:

when doing fetch - update the repository and leave the working copy on
the current revision,

when doing clone - clone the repository and leave the working copy on the
default branch.

Having this setting set to true means that for both the clone and fetch
strategies the Runner will check out the working copy to a revision related
to the CI pipeline:
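For example, to skip the automatic checkout and perform a custom one in the script (the merge commands are illustrative):

```yaml
variables:
  GIT_STRATEGY: clone
  GIT_CHECKOUT: "false"
script:
  - git checkout master
  - git merge $CI_BUILD_REF_NAME
```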

Shallow cloning

Introduced in GitLab 8.9 as an experimental feature. May change in future
releases or be removed completely.

You can specify the depth of fetching and cloning using GIT_DEPTH. This allows
shallow cloning of the repository which can significantly speed up cloning for
repositories with a large number of commits or old, large binaries. The value is
passed to git fetch and git clone.
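For example, to fetch only the last 3 commits:

```yaml
variables:
  GIT_DEPTH: "3"
```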

NOTE: Note:
If you use a depth of 1 and have a queue of jobs or retry
jobs, jobs may fail.

Since Git fetching and cloning is based on a ref, such as a branch name, Runners
can't clone a specific commit SHA. If there are multiple jobs in the queue, or
you are retrying an old job, the commit to be tested needs to be within the
Git history that is cloned. Setting too small a value for GIT_DEPTH can make
it impossible to run these old commits, and you will see unresolved reference
in job logs. If that happens, consider raising GIT_DEPTH.

Jobs that rely on git describe may not work correctly when GIT_DEPTH is
set since only part of the Git history is present.

Hidden keys (jobs)

If you want to temporarily 'disable' a job, rather than commenting out all the
lines where the job is defined:

```yaml
#hidden_job:
#  script:
#    - run test
```

you can instead start its name with a dot (.) and it will not be processed by
GitLab CI. In the following example, .hidden_job will be ignored:

```yaml
.hidden_job:
  script:
    - run test
```

Use this feature to ignore jobs, or use the
special YAML features and transform the hidden keys
into templates.

Anchors

Introduced in GitLab 8.6 and GitLab Runner v1.1.1.

YAML has a handy feature called 'anchors', which lets you easily duplicate
content across your document. Anchors can be used to duplicate or inherit
properties, and are a perfect example to use with hidden keys
to provide templates for your jobs.

The following example uses anchors and map merging. It will create two jobs,
test1 and test2, that will inherit the parameters of .job_template, each
having their own custom script defined:

```yaml
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
```

& sets up the name of the anchor (job_definition), << means "merge the
given hash into the current one", and * includes the named anchor
(job_definition again). The expanded version looks like this:
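Expanding the anchors and merge keys by hand yields:

```yaml
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
```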

Let's see another example. This time we will use anchors to define two sets
of services. This will create two jobs, test:postgres and test:mysql, that
will share the script directive defined in .job_template, and the services
directive defined in .postgres_services and .mysql_services respectively:

Triggers

Skipping jobs

If your commit message contains [ci skip] or [skip ci], using any
capitalization, the commit will be created but the pipeline will be skipped.

Validate the .gitlab-ci.yml

Each instance of GitLab CI has an embedded debug tool called Lint, which validates the
content of your .gitlab-ci.yml files. You can find the Lint under the page ci/lint of your
project namespace (e.g., http://gitlab-example.com/gitlab-org/project-123/-/ci/lint).

Using reserved keywords

If you get a validation error when using specific values (e.g., true or false),
try quoting them, or change them to a different form (e.g., /bin/true).

Examples

Visit the examples README to see a list of examples using GitLab
CI with various languages.