image and services

This allows you to specify a custom Docker image and a list of services that can be
used during the job. The configuration of this feature is covered in
a separate document.

before_script

before_script is used to define the command that should be run before all
jobs, including deploy jobs, but after the restoration of artifacts. This can
be an array or a multi-line string.
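
For instance, a minimal sketch (the `bundle install` command is illustrative):

```yaml
before_script:
  - bundle install
```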

after_script

Introduced in GitLab 8.7; requires GitLab Runner v1.2.

after_script is used to define the command that will be run after all
jobs. This has to be an array or a multi-line string.

Note:
The before_script and the main script are concatenated and run in a single context/container.
The after_script is run separately, so depending on the executor, changes done
outside of the working tree might not be visible, e.g. software installed in the
before_script.

stages

stages is used to define stages that can be used by jobs.
The specification of stages allows for flexible multi-stage pipelines.

The ordering of elements in stages defines the ordering of jobs' execution:

Jobs of the same stage are run in parallel.

Jobs of the next stage are run after the jobs from the previous stage
complete successfully.

Let's consider the following example, which defines 3 stages:

```yaml
stages:
  - build
  - test
  - deploy
```

First, all jobs of build are executed in parallel.

If all jobs of build succeed, the test jobs are executed in parallel.

If all jobs of test succeed, the deploy jobs are executed in parallel.

If all jobs of deploy succeed, the commit is marked as passed.

If any of the previous jobs fails, the commit is marked as failed and no
jobs of further stages are executed.

There are also two edge cases worth mentioning:

If no stages are defined in .gitlab-ci.yml, then build,
test and deploy are allowed to be used as a job's stage by default.

If a job doesn't specify a stage, the job is assigned the test stage.

types

Deprecated, and may be removed in a future release. Use stages instead.

variables

Introduced in GitLab Runner v0.5.0.

GitLab CI allows you to add variables to .gitlab-ci.yml that are set in the
job environment. The variables are stored in the Git repository and are meant
to store non-sensitive project configuration, for example:

```yaml
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
```

Note:
Integers (as well as strings) are legal for both a variable's name and value.
Floats are not legal and cannot be used.

These variables can later be used in all executed commands and scripts.
The YAML-defined variables are also set in all created service containers,
allowing you to fine-tune them. Variables can also be defined at the
job level.

In addition to user-defined variables, there are also variables set up by the
Runner itself. One example is CI_COMMIT_REF_NAME, which holds the name of
the branch or tag for which the project is built. Apart from the variables
you can set in .gitlab-ci.yml, there are also the so-called secret variables,
which can be set in GitLab's UI.

cache:key

The default key is default across the project, therefore everything is
shared between pipelines and jobs by default, starting from GitLab 9.0.

Note: The cache:key variable cannot contain the / character, or the equivalent URI encoded %2F; a value made only of dots (., %2E) is also forbidden.

Example configurations

To enable per-job caching:

```yaml
cache:
  key: "$CI_JOB_NAME"
  untracked: true
```

To enable per-branch caching:

```yaml
cache:
  key: "$CI_COMMIT_REF_NAME"
  untracked: true
```

To enable per-job and per-branch caching:

```yaml
cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
  untracked: true
```

To enable per-branch and per-stage caching:

```yaml
cache:
  key: "$CI_JOB_STAGE-$CI_COMMIT_REF_NAME"
  untracked: true
```

If you use Windows Batch to run your shell scripts you need to replace
$ with %:

```yaml
cache:
  key: "%CI_JOB_STAGE%-%CI_COMMIT_REF_NAME%"
  untracked: true
```

If you use Windows PowerShell to run your shell scripts you need to replace
$ with $env::

```yaml
cache:
  key: "$env:CI_JOB_STAGE-$env:CI_COMMIT_REF_NAME"
  untracked: true
```

cache:policy

Introduced in GitLab 9.4.

The default behaviour of a caching job is to download the files at the start of
execution, and to re-upload them at the end. This allows any changes made by the
job to be persisted for future runs, and is known as the pull-push cache
policy.

If you know the job doesn't alter the cached files, you can skip the upload step
by setting policy: pull in the job specification. Typically, this would be
twinned with an ordinary cache job at an earlier stage to ensure the cache
is updated from time to time:
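
A sketch of this pattern (job names, cache key, and paths are illustrative):

```yaml
prepare_cache:
  stage: build
  script: bundle install --path vendor/bundle
  cache:
    key: gems
    paths:
      - vendor/bundle

test:
  stage: test
  script: bundle exec rspec
  cache:
    key: gems
    paths:
      - vendor/bundle
    # skip the upload step; this job never alters the cached files
    policy: pull
```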

This helps to speed up job execution and reduce load on the cache server,
especially when you have a large number of cache-using jobs executing in
parallel.

Additionally, if you have a job that unconditionally recreates the cache without
reference to its previous contents, you can use policy: push in that job to
skip the download step.

Jobs

.gitlab-ci.yml allows you to specify an unlimited number of jobs. Each job
must have a unique name, which is not one of the keywords mentioned above.
A job is defined by a list of parameters that define the job behavior.

script

script is a shell script which is executed by the Runner. For example:

```yaml
job:
  script: "bundle exec rspec"
```

This parameter can also contain several commands using an array:

```yaml
job:
  script:
    - uname -a
    - bundle exec rspec
```

Sometimes, script commands will need to be wrapped in single or double quotes.
For example, commands that contain a colon (:) need to be wrapped in quotes so
that the YAML parser knows to interpret the whole thing as a string rather than
a "key: value" pair. Be careful when using special characters:
:, {, }, [, ], ,, &, *, #, ?, |, -, <, >, =, !, %, @, `.
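
For example, a command containing a colon must be quoted so it is parsed as a string (a sketch; the URL is illustrative):

```yaml
job:
  script:
    # without the quotes, the colon after --header would confuse the YAML parser
    - "curl --header 'Content-Type: application/json' http://example.com"
```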

stage

stage allows you to group jobs into different stages. Jobs of the same stage
are executed in parallel. For more info about the use of stage, please check
stages.

only and except (simplified)

only and except are two parameters that set a job policy to limit when
jobs are created:

only defines the names of branches and tags for which the job will run.

except defines the names of branches and tags for which the job will
not run.

There are a few rules that apply to the usage of job policy:

only and except are inclusive. If both only and except are defined
in a job specification, the ref is filtered by only and except.

only and except allow the use of regular expressions.

only and except allow you to specify a repository path to filter jobs for
forks.

In addition, only and except allow the use of special keywords:

| Value    | Description |
|----------|-------------|
| branches | When a branch is pushed. |
| tags     | When a tag is pushed. |
| api      | When a pipeline has been triggered by a second pipelines API (not triggers API). |
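
As a sketch, a job limited by a regular expression and a special keyword might look like this (the branch pattern is illustrative):

```yaml
job:
  # run only for refs matching the regexp
  only:
    - /^issue-.*$/
  # but never for plain branch pushes
  except:
    - branches
```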

tags

tags is used to select specific Runners from the list of all Runners that are
allowed to run this project.

During the registration of a Runner, you can specify the Runner's tags, for
example ruby, postgres, development.

tags allow you to run jobs with Runners that have the specified tags
assigned to them:

```yaml
job:
  tags:
    - ruby
    - postgres
```

The specification above will make sure that job is built by a Runner that
has both ruby AND postgres tags defined.

allow_failure

allow_failure is used when you want to allow a job to fail without impacting
the rest of the CI suite. Failed jobs don't contribute to the commit status.

When enabled and the job fails, the pipeline will be successful/green for all
intents and purposes, but a "CI build passed with warnings" message will be
displayed on the merge request or commit or job page. This is to be used by
jobs that are allowed to fail, but where failure indicates some other (manual)
steps should be taken elsewhere.

In the example below, job1 and job2 will run in parallel, but if job1
fails, it will not stop the next stage from running, since it's marked with
allow_failure: true:
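
A sketch of such a pipeline (the script commands are placeholders):

```yaml
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  # a failure here will not block the deploy stage
  allow_failure: true

job2:
  stage: test
  script:
    - execute_script_that_will_succeed

job3:
  stage: deploy
  script:
    - deploy_to_staging
```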

Manual actions

Manual actions are a special type of job that are not executed automatically;
they need to be explicitly started by a user. Manual actions can be started
from pipeline, build, environment, and deployment views.

Manual actions can be either optional or blocking. A blocking manual action will
block execution of the pipeline at the stage where this action is defined. It is
possible to resume execution of the pipeline when someone executes the blocking
manual action by clicking a play button.

When a pipeline is blocked, it will not be merged if Merge When Pipeline Succeeds
is set. Blocked pipelines also have a special status, called manual.

Manual actions are non-blocking by default. If you want to make a manual action
blocking, add allow_failure: false to the job's definition
in .gitlab-ci.yml.

Optional manual actions have allow_failure: true set by default.

Statuses of optional actions do not contribute to overall pipeline status.

Manual actions are considered to be write actions, so permissions for
protected branches are used when a user wants to trigger an action. In other
words, in order to trigger a manual action assigned to a branch that the
pipeline is running for, the user needs the ability to merge to this branch.
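
The discussion below refers to a configuration along these lines (a sketch; the make targets are placeholders):

```yaml
review_app:
  stage: deploy
  script: make deploy-app
  environment:
    name: review/$CI_COMMIT_REF_NAME
    on_stop: stop_review_app

stop_review_app:
  stage: deploy
  script: make delete-app
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
```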

In the above example we set up the review_app job to deploy to the review
environment, and we also defined a new stop_review_app job under on_stop.
Once the review_app job is successfully finished, it will trigger the
stop_review_app job based on what is defined under when. In this case we
set it up to manual so it will need a manual action via
GitLab's web interface in order to run.

The stop_review_app job is required to have the following keywords defined:

The deploy as review app job will be marked as a deployment to dynamically
create the review/$CI_COMMIT_REF_NAME environment, where $CI_COMMIT_REF_NAME
is an environment variable set by the Runner. The
$CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable
for inclusion in URLs. In this case, if the deploy as review app job was run
in a branch named pow, this environment would be accessible at a URL like
https://review-pow.example.com/.

This of course implies that the underlying server which hosts the application
is properly configured.

artifacts

artifacts is used to specify a list of files and directories which should be
attached to the job after success. You can only use paths that are within the
project workspace. To pass artifacts between different jobs, see dependencies.
Below are some examples.

The artifacts will be sent to GitLab after the job finishes successfully and will
be available for download in the GitLab UI.

artifacts:name

Introduced in GitLab 8.6 and GitLab Runner v1.1.0.

The name directive allows you to define the name of the created artifacts
archive. That way, you can have a unique name for every archive which could be
useful when you'd like to download the archive from GitLab. The artifacts:name
variable can make use of any of the predefined variables.
The default name is artifacts, which becomes artifacts.zip when downloaded.

Example configurations

To create an archive with a name of the current job:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME"
```

To create an archive with a name of the current branch or tag including only
the files that are untracked by Git:

```yaml
job:
  artifacts:
    name: "$CI_COMMIT_REF_NAME"
    untracked: true
```

To create an archive with a name of the current job and the current branch or
tag including only the files that are untracked by Git:
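
Following the pattern of the previous examples, this could look like:

```yaml
job:
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    untracked: true
```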

artifacts:when

artifacts:when is used to upload artifacts on job failure or despite the
failure.

artifacts:when can be set to one of the following values:

on_success - upload artifacts only when the job succeeds. This is the default.

on_failure - upload artifacts only when the job fails.

always - upload artifacts regardless of the job status.

Example configurations

To upload artifacts only when the job fails:

```yaml
job:
  artifacts:
    when: on_failure
```

artifacts:expire_in

Introduced in GitLab 8.9 and GitLab Runner v1.3.0.

artifacts:expire_in is used to delete uploaded artifacts after the specified
time. By default, artifacts are stored on GitLab forever. expire_in allows you
to specify how long artifacts should live before they expire, counting from the
time they are uploaded and stored on GitLab.

You can use the Keep button on the job page to override expiration and
keep artifacts forever.

After expiry, artifacts are deleted (hourly by default, via a cron job) and are
no longer accessible.

The value of expire_in is an elapsed time. Examples of parseable values:

'3 mins 4 sec'

'2 hrs 20 min'

'2h20min'

'6 mos 1 day'

'47 yrs 6 mos and 4d'

'3 weeks and 2 days'

Example configurations

To expire artifacts 1 week after being uploaded:

```yaml
job:
  artifacts:
    expire_in: 1 week
```

dependencies

Introduced in GitLab 8.6 and GitLab Runner v1.1.1.

This feature should be used in conjunction with artifacts and
allows you to define the artifacts to pass between different jobs.

To use this feature, define dependencies in the context of the job and pass
a list of all previous jobs from which the artifacts should be downloaded.
You can only define jobs from stages that are executed before the current one.
An error will be shown if you define jobs from the current stage or later ones.
Defining an empty array will skip downloading any artifacts for that job.
The status of the previous job is not considered when using dependencies, so
if it failed or it is a manual job that was not run, no error occurs.

In the following example, we define two jobs with artifacts, build:osx and
build:linux. When the test:osx is executed, the artifacts from build:osx
will be downloaded and extracted in the context of the build. The same happens
for test:linux and artifacts from build:linux.

The job deploy will download artifacts from all previous jobs because of
the stage precedence:
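
A sketch matching the description above (the make targets and artifact paths are illustrative):

```yaml
build:osx:
  stage: build
  script: make build:osx
  artifacts:
    paths:
      - binaries/

build:linux:
  stage: build
  script: make build:linux
  artifacts:
    paths:
      - binaries/

test:osx:
  stage: test
  script: make test:osx
  dependencies:
    - build:osx

test:linux:
  stage: test
  script: make test:linux
  dependencies:
    - build:linux

deploy:
  stage: deploy
  script: make deploy
```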

coverage

coverage allows you to configure how code coverage will be extracted from the
job output.

Regular expressions are the only valid kind of value expected here. So, using
surrounding / is mandatory in order to consistently and explicitly represent
a regular expression string. You must escape special characters if you want to
match them literally.
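
For example, a sketch (the regular expression is illustrative and must match the coverage line your test tool prints):

```yaml
job1:
  coverage: '/Code coverage: \d+\.\d+/'
```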

retry

retry allows you to configure how many times a job is going to be retried in
case of a failure.

When a job fails and has retry configured, it is processed again,
up to the number of times specified by the retry keyword.

If retry is set to 2 and a job succeeds on its second run (first retry), it won't be retried
again. The retry value has to be an integer between 0 and 2
(two retries maximum, three runs in total).

A simple example:

```yaml
test:
  script: rspec
  retry: 2
```

Git Strategy

Introduced in GitLab 8.9 as an experimental feature. May change or be removed
completely in future releases. GIT_STRATEGY=none requires GitLab Runner
v1.7+.

You can set the GIT_STRATEGY used for getting recent application code, either
in the global variables section or the variables
section for individual jobs. If left unspecified, the default from project
settings will be used.

There are three possible values: clone, fetch, and none.

clone is the slowest option. It clones the repository from scratch for every
job, ensuring that the project workspace is always pristine.

```yaml
variables:
  GIT_STRATEGY: clone
```

fetch is faster as it re-uses the project workspace (falling back to clone
if it doesn't exist). git clean is used to undo any changes made by the last
job, and git fetch is used to retrieve commits made since the last job ran.

```yaml
variables:
  GIT_STRATEGY: fetch
```

none also re-uses the project workspace, but skips all Git operations
(including GitLab Runner's pre-clone script, if present). It is mostly useful
for jobs that operate exclusively on artifacts (e.g., deploy). Git repository
data may be present, but it is certain to be out of date, so you should only
rely on files brought into the project workspace from cache or artifacts.

```yaml
variables:
  GIT_STRATEGY: none
```

Git Checkout

Introduced in GitLab Runner 9.3

The GIT_CHECKOUT variable can be used when the GIT_STRATEGY is set to either
clone or fetch to specify whether a git checkout should be run. If not
specified, it defaults to true. Like GIT_STRATEGY, it can be set in either the
global variables section or the variables
section for individual jobs.

If set to false, the Runner will:

when doing fetch - update the repository and leave the working copy on
the current revision,

when doing clone - clone the repository and leave the working copy on the
default branch.

Having this setting set to true means that, for both clone and fetch
strategies, the Runner will check out the working copy to a revision related
to the CI pipeline:
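
A sketch of disabling the automatic checkout in favor of custom Git commands (the merge step is illustrative):

```yaml
job:
  variables:
    GIT_STRATEGY: clone
    # skip the automatic git checkout
    GIT_CHECKOUT: "false"
  script:
    - git checkout master
    - git merge $CI_COMMIT_REF_NAME
```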

Git Submodule Strategy

Requires GitLab Runner v1.10+.

The GIT_SUBMODULE_STRATEGY variable is used to control whether and how Git
submodules are included when fetching the code before a build. Like
GIT_STRATEGY, it can be set in either the global variables
section or the variables section for individual jobs.

There are three possible values: none, normal, and recursive:

none means that submodules will not be included when fetching the project
code. This is the default, which matches the pre-v1.10 behavior.

normal means that only the top-level submodules will be included. It is
equivalent to:

```shell
git submodule sync
git submodule update --init
```

recursive means that all submodules (including submodules of submodules)
will be included. It is equivalent to:

```shell
git submodule sync --recursive
git submodule update --init --recursive
```

Note that for this feature to work correctly, the submodules must be configured
(in .gitmodules) with either:

the HTTP(S) URL of a publicly-accessible repository, or

a relative path to another repository on the same GitLab server. See the
Git submodules documentation.

Job stages attempts

Introduced in GitLab; requires GitLab Runner v1.9+.

You can set the number of attempts that the running job will try to execute each
of the following stages:

| Variable                   | Description |
|----------------------------|-------------|
| GET_SOURCES_ATTEMPTS       | Number of attempts to fetch sources running a job |
| ARTIFACT_DOWNLOAD_ATTEMPTS | Number of attempts to download artifacts running a job |
| RESTORE_CACHE_ATTEMPTS     | Number of attempts to restore the cache running a job |

The default is one single attempt.

Example:

```yaml
variables:
  GET_SOURCES_ATTEMPTS: 3
```

You can set them in the global variables section or the
variables section for individual jobs.

Shallow cloning

Introduced in GitLab 8.9 as an experimental feature. May change in future
releases or be removed completely.

You can specify the depth of fetching and cloning using GIT_DEPTH. This allows
shallow cloning of the repository which can significantly speed up cloning for
repositories with a large number of commits or old, large binaries. The value is
passed to git fetch and git clone.

Note:
If you use a depth of 1 and have a queue of jobs or retry
jobs, jobs may fail.

Since Git fetching and cloning is based on a ref, such as a branch name, Runners
can't clone a specific commit SHA. If there are multiple jobs in the queue, or
you are retrying an old job, the commit to be tested needs to be within the
Git history that is cloned. Setting too small a value for GIT_DEPTH can make
it impossible to run these old commits; you will see unresolved reference in
job logs. You should then change GIT_DEPTH to a higher value.

Jobs that rely on git describe may not work correctly when GIT_DEPTH is
set since only part of the Git history is present.

To fetch or clone only the last 3 commits:

```yaml
variables:
  GIT_DEPTH: "3"
```

Hidden keys (jobs)

Introduced in GitLab 8.6 and GitLab Runner v1.1.1.

If you want to temporarily 'disable' a job, rather than commenting out all the
lines where the job is defined:

```yaml
#hidden_job:
#  script:
#    - run test
```

you can instead start its name with a dot (.) and it will not be processed by
GitLab CI. In the following example, .hidden_job will be ignored:

```yaml
.hidden_job:
  script:
    - run test
```

Use this feature to ignore jobs, or use the
special YAML features and transform the hidden keys
into templates.

Special YAML features

It's possible to use special YAML features like anchors (&), aliases (*)
and map merging (<<), which will allow you to greatly reduce the complexity
of .gitlab-ci.yml.

Anchors

Introduced in GitLab 8.6 and GitLab Runner v1.1.1.

YAML has a handy feature called 'anchors', which lets you easily duplicate
content across your document. Anchors can be used to duplicate or inherit
properties, and are a perfect fit for use with hidden keys
to provide templates for your jobs.

The following example uses anchors and map merging. It will create two jobs,
test1 and test2, that will inherit the parameters of .job_template, each
having their own custom script defined:

```yaml
.job_template: &job_definition  # Hidden key that defines an anchor named 'job_definition'
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test1 project

test2:
  <<: *job_definition           # Merge the contents of the 'job_definition' alias
  script:
    - test2 project
```

& sets up the name of the anchor (job_definition), << means "merge the
given hash into the current one", and * includes the named anchor
(job_definition again). The expanded version looks like this:
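
A sketch of the expanded configuration, with the merge applied by hand:

```yaml
.job_template:
  image: ruby:2.1
  services:
    - postgres
    - redis

test1:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test1 project

test2:
  image: ruby:2.1
  services:
    - postgres
    - redis
  script:
    - test2 project
```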

Let's see another example. This time we will use anchors to define two sets
of services. This will create two jobs, test:postgres and test:mysql, that
will share the script directive defined in .job_template, and the services
directive defined in .postgres_services and .mysql_services respectively:
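
A sketch matching that description (the script command and service lists are illustrative):

```yaml
.job_template: &job_definition
  script:
    - test project

.postgres_services:
  services: &postgres_definition
    - postgres
    - ruby

.mysql_services:
  services: &mysql_definition
    - mysql
    - ruby

test:postgres:
  <<: *job_definition
  services: *postgres_definition

test:mysql:
  <<: *job_definition
  services: *mysql_definition
```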