Auto DevOps

Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically
detect, build, test, deploy, and monitor your applications. Leveraging CI/CD
best practices and tools, Auto DevOps aims to simplify the setup and execution
of a mature and modern software development lifecycle.

Overview

You can spend a lot of effort to set up the workflow and processes required to
build, deploy, and monitor your project. It gets worse when your company has
hundreds, if not thousands, of projects to maintain. With new projects
constantly starting up, the entire software development process becomes
impossibly complex to manage.

Auto DevOps provides you with a seamless software development process by
automatically detecting all dependencies and language technologies required to
test, build, package, deploy, and monitor every project with minimal
configuration. Automation enables consistency across your projects, seamless
management of processes, and faster creation of new projects: push your code,
and GitLab does the rest, improving your productivity and efficiency.

Enabled by default

Auto DevOps is enabled by default for all projects and attempts to run on all pipelines
in each project. An instance administrator can enable or disable this default in the
Auto DevOps settings.
Auto DevOps is automatically disabled in an individual project on its first pipeline failure,
if it has not been explicitly enabled for that project.

Comparison to application platforms and PaaS

Auto DevOps provides features often included in an application
platform or a Platform as a Service (PaaS). It takes inspiration from the
innovative work done by Heroku and goes beyond it
in multiple ways:

Auto DevOps works with any Kubernetes cluster; you're not limited to running
on GitLab's infrastructure. (Note that many features also work without Kubernetes).

There is no additional cost (no markup on the infrastructure costs), and you
can use a Kubernetes cluster you host or Containers as a Service on any
public cloud (for example, Google Kubernetes Engine).

Auto DevOps has more features including security testing, performance testing,
and code quality testing.

Auto DevOps offers an incremental graduation path. If you need advanced customizations,
you can start modifying the templates without starting over on a
completely different platform. Review the customizing documentation for more information.

Features

Composed of a set of stages, Auto DevOps brings these best practices to your
project in a simple and automatic way:

NGINX Ingress. You can deploy it to your Kubernetes cluster by installing
the GitLab-managed app for Ingress, after configuring GitLab's Kubernetes
integration.

Alternatively, you can use the
nginx-ingress
Helm chart to install Ingress manually.

NOTE:
If you use your own Ingress instead of the one provided by GitLab's managed
apps, ensure you're running at least version 0.9.0 of NGINX Ingress and
enable Prometheus metrics
for the response metrics to appear. You must also
annotate
the NGINX Ingress deployment to be scraped by Prometheus using
prometheus.io/scrape: "true" and prometheus.io/port: "10254".
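For reference, those two annotations belong in the pod template metadata of the NGINX Ingress controller's Deployment. A minimal sketch follows; the Deployment name and namespace shown are assumptions, so adjust them to match your installation:

```yaml
# Sketch only: annotate the Ingress controller pods so Prometheus scrapes them.
# The name and namespace below are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: gitlab-managed-apps
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
```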

You need a domain configured with wildcard DNS, which all of your Auto DevOps
applications will use. If you're using the
GitLab-managed app for Ingress,
the URL endpoint is automatically configured for you.

Your Runner must be configured to run Docker, usually with either the
Docker
or Kubernetes executors, with
privileged mode enabled.
The Runners don't need to be installed in the Kubernetes cluster, but the
Kubernetes executor is easy to use and autoscales automatically.
You can configure Docker-based Runners to autoscale as well, using
Docker Machine.
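As a sketch, a Docker-executor Runner with privileged mode enabled looks roughly like this in config.toml; the URL, token, and image values are placeholders:

```toml
# Fragment of /etc/gitlab-runner/config.toml -- values are placeholders.
[[runners]]
  name = "docker-runner"
  url = "https://gitlab.example.com/"
  token = "RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true   # required for the Docker-in-Docker builds Auto DevOps uses
```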

Runners should be registered as shared Runners
for the entire GitLab instance, or specific Runners
that are assigned to specific projects (the default if you've installed the
GitLab Runner managed application).

To enable Auto Monitoring, you need Prometheus installed either inside or
outside your cluster, and configured to scrape your Kubernetes cluster.
If you've configured GitLab's Kubernetes integration, you can deploy it to
your cluster by installing the
GitLab-managed app for Prometheus.
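If you run your own Prometheus instead, it must discover pods through the Kubernetes API and honor the prometheus.io/scrape annotation. A minimal scrape-config sketch, assuming in-cluster service-account credentials:

```yaml
# Sketch of a prometheus.yml fragment that keeps only pods annotated with
# prometheus.io/scrape: "true" -- adjust for your own deployment.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```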

To enable HTTPS endpoints for your application, you must install cert-manager,
a native Kubernetes certificate management controller that helps with issuing
certificates. Installing cert-manager on your cluster issues a
Let’s Encrypt certificate and ensures the
certificates are valid and up-to-date. If you've configured GitLab's Kubernetes
integration, you can deploy it to your cluster by installing the
GitLab-managed app for cert-manager.

Auto DevOps requires a wildcard DNS A record matching the base domain(s). For
a base domain of example.com, you'd need a DNS entry like:

*.example.com 3600 A 1.2.3.4

In this case, the deployed applications are served from example.com, and 1.2.3.4
is the IP address of your load balancer; generally NGINX (see requirements).
Setting up the DNS record is beyond the scope of this document; check with your
DNS provider for information.

Alternatively, you can use free public services like nip.io
which provide automatic wildcard DNS without any configuration. For nip.io,
set the Auto DevOps base domain to 1.2.3.4.nip.io.
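nip.io works by embedding the target IP address in the hostname itself, so any subdomain of 1.2.3.4.nip.io resolves to 1.2.3.4 with no DNS setup. A small sketch of that naming convention (string parsing only, no DNS lookup; the helper name is made up for illustration):

```python
import re

def nip_io_target(hostname: str) -> str:
    """Extract the IPv4 address a *.nip.io hostname resolves to."""
    match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})\.nip\.io$", hostname)
    if not match:
        raise ValueError(f"not a nip.io hostname: {hostname}")
    return match.group(1)

print(nip_io_target("myapp.1.2.3.4.nip.io"))  # -> 1.2.3.4
```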

After completing setup, all requests hit the load balancer, which routes requests
to the Kubernetes pods running your application.

Enabling/Disabling Auto DevOps

When first using Auto DevOps, review the requirements to ensure all the
components necessary to make full use of Auto DevOps are available. First-time
users should follow the quick start guide.

GitLab.com users can enable or disable Auto DevOps only at the project level.
Self-managed users can enable or disable Auto DevOps at the project, group, or
instance level.

At the project level

If enabling, check that your project does not have a .gitlab-ci.yml, or if one exists, remove it.

At the group level

Only administrators and group owners can enable or disable Auto DevOps at the group level.

When enabling or disabling Auto DevOps at group level, group configuration is
implicitly used for the subgroups and projects inside that group, unless Auto DevOps
is specifically enabled or disabled on the subgroup or project.

To enable or disable Auto DevOps at the group level:

Go to your group's Settings > CI/CD > Auto DevOps page.

Select the Default to Auto DevOps pipeline checkbox to enable it.

Click Save changes for the changes to take effect.

At the instance level (Administrators only)

Even when disabled at the instance level, group owners and project maintainers can still enable
Auto DevOps at the group and project level, respectively.

Continuous deployment to production using timed incremental rollout: sets the
INCREMENTAL_ROLLOUT_MODE variable
to timed. Production deployments execute with a 5-minute delay between
each increment in the rollout.
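If you drive this from your own .gitlab-ci.yml on top of the Auto DevOps template, the variable can presumably be set as below; this is a sketch, not the only way to configure it:

```yaml
# Sketch: enable timed incremental rollouts on top of the Auto DevOps template.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  INCREMENTAL_ROLLOUT_MODE: "timed"
```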

Using multiple Kubernetes clusters (PREMIUM)

When using Auto DevOps, you can deploy different environments to
different Kubernetes clusters, because of the 1:1 connection
between clusters and environments.

The template
used by Auto DevOps currently defines three environment names:

review/ (every environment starting with review/)

staging

production

Those environments are tied to jobs using Auto Deploy, so in addition to a
distinct environment scope, each must have a different deployment domain.
You must define a separate KUBE_INGRESS_BASE_DOMAIN variable for each of the
environments above.

The following table is an example of how to configure the three different clusters:

| Cluster name | Cluster environment scope | KUBE_INGRESS_BASE_DOMAIN variable value | Variable environment scope | Notes |
|--------------|---------------------------|-----------------------------------------|----------------------------|-------|
| review | review/* | review.example.com | review/* | The review cluster which runs all Review Apps. * is a wildcard, used by every environment name starting with review/. |
| staging | staging | staging.example.com | staging | (Optional) The staging cluster which runs the deployments of the staging environments. You must enable it first. |
| production | production | example.com | production | The production cluster which runs the production environment deployments. You can use incremental rollouts. |

To add a different cluster for each environment:

Navigate to your project's Operations > Kubernetes page.

Create the Kubernetes clusters with their respective environment scope, as
described in the table above.

After creating the clusters, navigate to each cluster and install Helm Tiller
and Ingress. Wait for the Ingress IP address to be assigned.

Navigate to each cluster's page, through Operations > Kubernetes,
and add the domain based on its Ingress IP address.

After completing configuration, you can test your setup by creating a merge request
and verifying your application is deployed as a Review App in the Kubernetes
cluster with the review/* environment scope. Similarly, you can check the
other environments.

Currently supported languages

Note that not all buildpacks support Auto Test yet, as it's a relatively new
enhancement. All of Heroku's
officially supported languages
support Auto Test, as do the languages supported by Heroku's Herokuish
buildpacks; notably, however, the multi-buildpack does not.

If your application needs a buildpack that is not among those supported
languages, you might want to use a custom buildpack.
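One way to point Auto DevOps at a custom buildpack is the BUILDPACK_URL CI/CD variable; a sketch follows, in which the repository URL is a placeholder:

```yaml
# Sketch: use a custom Herokuish buildpack via the BUILDPACK_URL variable.
# The repository URL below is a placeholder.
variables:
  BUILDPACK_URL: "https://github.com/example/heroku-buildpack-example.git"
```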

Limitations

The following restrictions apply.

Private registry support

There is no documented way of using a private container registry with Auto DevOps.
We strongly advise using the GitLab Container Registry with Auto DevOps to
simplify configuration and prevent any unforeseen issues.

Installing Helm behind a proxy

GitLab does not support installing Helm as a GitLab-managed App when
behind a proxy. Users who want to do so must inject their proxy settings
into the installation pods at runtime, such as by using a
PodPreset:
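A sketch of such a PodPreset, targeting the gitlab-managed-apps namespace; the proxy URLs are placeholders, and PodPreset is an alpha API that must be enabled on your cluster:

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: gitlab-managed-apps-default-proxy
  namespace: gitlab-managed-apps
spec:
  env:
    - name: http_proxy
      value: "http://proxy.example.com:8080"   # placeholder
    - name: https_proxy
      value: "http://proxy.example.com:8080"   # placeholder
```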

Your application may be missing the key files the buildpack looks for. For
example, Ruby applications require a Gemfile to be properly detected,
even though it's possible to write a Ruby app without a Gemfile.
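For instance, a minimal Gemfile is enough for the Ruby buildpack to detect the application; the contents below are illustrative:

```ruby
# Gemfile -- even a near-empty one lets the Ruby buildpack detect the app.
source "https://rubygems.org"
```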