Zuul is a distributed system consisting of several components, each of
which is described below.

Each of the Zuul processes may run on the same host, or different
hosts. Within Zuul, the components communicate with the scheduler via
the Gearman protocol, so each Zuul component needs to be able to
connect to the host running the Gearman server (the scheduler has a
built-in Gearman server which is recommended) on the Gearman port –
TCP port 4730 by default.

The Zuul scheduler communicates with Nodepool via the ZooKeeper
protocol. Nodepool requires an external ZooKeeper cluster, and the
Zuul scheduler needs to be able to connect to the hosts in that
cluster on TCP port 2181.

Both the Nodepool launchers and Zuul executors need to be able to
communicate with the hosts which nodepool provides. If these are on
private networks, the Executors will need to be able to route traffic
to them.

If statsd is enabled, every service needs to be able to emit data to
statsd. Statsd can be configured to run on each host and forward
data, or services may emit to a centralized statsd collector. Statsd
listens on UDP port 8125 by default.

All Zuul processes read the /etc/zuul/zuul.conf file (an alternate
location may be supplied on the command line) which uses an INI file
syntax. Each component may have its own configuration file, though
you may find it simpler to use the same file for all components.
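For illustration, a minimal zuul.conf covering the connectivity described
above might look roughly like the following; the hostnames are placeholders
and the option names reflect a typical deployment, so check them against the
reference documentation for your version:

  [gearman]
  server=zuul.example.com
  port=4730

  [gearman_server]
  # Enable the scheduler's built-in Gearman server.
  start=true

  [zookeeper]
  hosts=zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181

  [statsd]
  server=statsd.example.com
  port=8125

An alternate location for the file may be supplied on the command line, for
example zuul-scheduler -c /path/to/zuul.conf (assuming the usual -c option).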

A minimal Zuul system may consist of a Scheduler and
Executor both running on the same host. Larger installations
should consider running multiple executors, each on a dedicated host,
and running mergers on dedicated hosts as well.

The scheduler is the primary component of Zuul. The scheduler is not
a scalable component; one, and only one, scheduler must be running at
all times for Zuul to be operational. It receives events from any
connections to remote systems which have been configured, enqueues
items into pipelines, distributes jobs to executors, and reports
results.

The scheduler includes a Gearman server which is used to communicate
with other components of Zuul. It is possible to use an external
Gearman server, but the built-in server is well-tested and
recommended. If the built-in server is used, other Zuul hosts will
need to be able to connect to the scheduler on the Gearman port, TCP
port 4730. It is also strongly recommended to use SSL certs with
Gearman, as secrets are transferred from the scheduler to executors
over this link.
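A sketch of what enabling the built-in Gearman server with SSL might look
like in zuul.conf; the certificate paths are placeholders, and the matching
client-side settings go in the [gearman] section used by the other
components:

  [gearman_server]
  start=true
  ssl_ca=/etc/zuul/ssl/ca.pem
  ssl_cert=/etc/zuul/ssl/scheduler.pem
  ssl_key=/etc/zuul/ssl/scheduler.key

  [gearman]
  server=zuul.example.com
  ssl_ca=/etc/zuul/ssl/ca.pem
  ssl_cert=/etc/zuul/ssl/client.pem
  ssl_key=/etc/zuul/ssl/client.key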

The scheduler must be able to connect to the ZooKeeper cluster used by
Nodepool in order to request nodes. It does not need to connect
directly to the nodes themselves, however – that function is handled
by the Executors.

It must also be able to connect to any services for which connections
are configured (Gerrit, GitHub, etc).
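Each such connection is defined in its own section of zuul.conf. For
example, a Gerrit connection might look like the following (all values are
placeholders):

  [connection gerrit]
  driver=gerrit
  server=review.example.com
  user=zuul
  sshkey=/var/lib/zuul/.ssh/id_rsa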

To start the scheduler, run zuul-scheduler. To stop it, kill the
PID which was saved in the pidfile specified in the configuration.

Most of Zuul’s configuration is automatically updated as changes to
the repositories which contain it are merged. However, Zuul must be
explicitly notified of changes to the tenant config file, since it is
not read from a git repository. To do so, send the scheduler PID
(saved in the pidfile specified in the configuration) a SIGHUP
signal.
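For example, assuming a pidfile of /var/run/zuul/zuul-scheduler.pid (the
actual path is whatever the configuration specifies):

  zuul-scheduler
  kill $(cat /var/run/zuul/zuul-scheduler.pid)         # stop the scheduler
  kill -HUP $(cat /var/run/zuul/zuul-scheduler.pid)    # reload the tenant config file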

Mergers are an optional Zuul service; they are not required for Zuul
to operate, but some high volume sites may benefit from running them.
Zuul performs quite a lot of git operations in the course of its work.
Each change that is to be tested must be speculatively merged with the
current state of its target branch to ensure that it can merge, and to
ensure that the tests that Zuul performs accurately represent the
outcome of merging the change. Because Zuul’s configuration is stored
in the git repos it interacts with, and is dynamically evaluated, Zuul
often needs to perform a speculative merge in order to determine
whether it needs to perform any further actions.

All of these git operations add up, and while Zuul executors can also
perform them, large numbers may impact their ability to run jobs.
Therefore, administrators may wish to run standalone mergers in order
to reduce the load on executors.

Mergers need to be able to connect to the Gearman server (usually the
scheduler host) as well as any services for which connections are
configured (Gerrit, GitHub, etc).
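If standalone mergers are run, they read the same zuul.conf and are started
with zuul-merger. A minimal sketch of merger-specific settings (the path and
identity below are placeholders):

  [merger]
  git_dir=/var/lib/zuul/merger-git
  git_user_name=Zuul Merger
  git_user_email=zuul@example.com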

Executors are responsible for running jobs. At the start of each job,
an executor prepares an environment in which to run Ansible which
contains all of the git repositories specified by the job with all
dependent changes merged into their appropriate branches. The branch
corresponding to the proposed change will be checked out (in all
projects, if it exists). Any roles specified by the job will also be
present (also with dependent changes merged, if appropriate) and added
to the Ansible role path. The executor also prepares an Ansible
inventory file with all of the nodes requested by the job.

The executor also contains a merger. This is used by the executor to
prepare the git repositories used by jobs, but is also available to
perform any tasks normally performed by standalone mergers. Because
the executor performs both roles, small Zuul installations may not
need to run standalone mergers.

Executors need to be able to connect to the Gearman server (usually
the scheduler host), any services for which connections are configured
(Gerrit, GitHub, etc), as well as directly to the hosts which Nodepool
provides.

The executor runs playbooks in one of two execution contexts depending
on whether the project containing the playbook is a
config-project or an untrusted-project. If the
playbook is in a config project, the executor runs the playbook in the
trusted execution context, otherwise, it is run in the untrusted
execution context.

Both execution contexts use bubblewrap[1] to create a
namespace to ensure that playbook executions are isolated and are unable
to access files outside of a restricted environment. The administrator
may configure additional local directories on the executor to be made
available to the restricted environment.
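A sketch of how such additional directories might be granted; the option
names below (colon-separated lists of read-only and read-write paths per
execution context) reflect a typical executor configuration and should be
verified against the reference documentation:

  [executor]
  # Extra paths bind-mounted into the bubblewrap environment.
  trusted_ro_paths=/opt/corporate-ca-certs
  trusted_rw_paths=/var/cache/zuul-trusted
  untrusted_ro_paths=/opt/corporate-ca-certs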

The trusted execution context has access to all Ansible features,
including the ability to load custom Ansible modules. Needless to
say, extra scrutiny should be given to code that runs in a trusted
context as it could be used to compromise other jobs running on the
executor, or the executor itself, especially if the administrator has
granted additional access through bubblewrap, or a method of escaping
the restricted environment created by bubblewrap is found.

Playbooks run in the untrusted execution context are not permitted to
load additional Ansible modules or access files outside of the
restricted environment prepared for them by the executor. In addition
to the bubblewrap environment applied to both execution contexts, in
the untrusted context some standard Ansible modules are replaced with
versions which prohibit some actions, including attempts to access
files outside of the restricted execution context. These redundant
protections are part of a defense-in-depth strategy.

Directory that Zuul should use to hold temporary job directories.
When each job is run, a new entry will be created under this
directory to hold the configuration and scratch workspace for
that job. It will be deleted at the end of the job (unless the
--keep-jobdir command line option is specified).

This should be on the same filesystem as executor.git_dir
so that when git repos are cloned into the job workspaces, they
can be hard-linked to the local git cache.
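For example, keeping both directories on one filesystem (the paths are
placeholders):

  [executor]
  job_dir=/var/lib/zuul/builds
  git_dir=/var/lib/zuul/executor-git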

Path to an Ansible variables file to supply site-wide variables.
This should be a YAML-formatted file consisting of a single
dictionary. The contents will be made available to all jobs as
Ansible variables. These variables take precedence over all
other forms (job variables and secrets). Care should be taken
when naming these variables to avoid potential collisions with
those used by jobs. Prefixing variable names with a
site-specific identifier is recommended. The default is not to
add any site-wide variables. See the User’s Guide for more information.
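A sketch of a site variables file and the executor option pointing at it;
the file path and variable names are placeholders, prefixed with a
site-specific identifier as recommended above:

  # /etc/zuul/site-variables.yaml
  example_site_mirror: https://mirror.example.com
  example_site_cloud_region: us-east-1

Then, in zuul.conf:

  [executor]
  variables=/etc/zuul/site-variables.yaml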

Name of the execution wrapper to use when executing
ansible-playbook. The default, bubblewrap, is recommended for
all installations.
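For example, stating the default explicitly in zuul.conf:

  [executor]
  execution_wrapper=bubblewrap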

There is also a nullwrap driver for situations where one wants
to run Zuul without access to bubblewrap or in such a way that
bubblewrap may interfere with the jobs themselves. However,
nullwrap is considered unsafe, as bubblewrap provides
significant protections against malicious users and accidental
breakage in playbooks. As such, nullwrap is not recommended
for use in production.

This option, and thus nullwrap, may be removed in the future.
bubblewrap has become integral to securely operating Zuul. If you
have a valid use case for it, we encourage you to let us know.

When an executor host gets too busy, the system may suffer
timeouts and other ill effects. The executor will stop accepting
more than one job at a time until the load has dropped below a
safe level. This level is determined by multiplying the number of
CPUs by load_multiplier.

So for example, if the system has 2 CPUs, and load_multiplier
is 2.5, the safe load for the system is 5.00. Any time the
system load average is over 5.00, the executor will stop
accepting more than one job at a time.

The executor will observe system load and determine whether
to accept more jobs every 30 seconds.
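Putting the example above into configuration: on a 2 CPU host, the setting
below yields a safe load of 2 × 2.5 = 5.00.

  [executor]
  load_multiplier=2.5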

This is the minimum percentage of system RAM that must be
available. If available memory falls below this threshold, the
executor will stop accepting more than one job at a time until it
rises above it again. The available memory percentage is
calculated from the total available memory divided by the
total real memory multiplied by 100. Buffers and cache are
considered available in the calculation.
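For example, requiring at least 5% of RAM (counting buffers and cache) to be
available before accepting additional jobs, assuming the option is named
min_avail_mem:

  [executor]
  min_avail_mem=5.0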

The executor needs to know the hostname under which it is reachable by
zuul-web; otherwise live console log streaming does not work. In most
cases this is detected automatically, but in environments where the
executor cannot determine its hostname correctly it can be overridden
here.
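For example (the value is a placeholder), assuming the executor's hostname
option:

  [executor]
  hostname=ze01.example.com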

The Zuul web server currently acts as a websocket interface to live log
streaming. Eventually, it will serve as the single process handling all
HTTP interactions with Zuul.

Web servers need to be able to connect to the Gearman server (usually
the scheduler host). If the SQL reporter is used, they need to be
able to connect to the database it reports to in order to support the
dashboard. If a GitHub connection is configured, they need to be
reachable by GitHub so they may receive notifications.
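The web server is started with zuul-web. A minimal sketch of its own
settings and a SQL reporter connection it could use for the dashboard; the
values are placeholders and the option names should be checked against the
reference documentation:

  [web]
  listen_address=0.0.0.0
  port=9000

  [connection database]
  driver=sql
  dburi=mysql+pymysql://zuul:secret@db.example.com/zuul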

The Zuul finger gateway listens on the standard finger port (79) for
finger requests specifying a build UUID for which it should stream log
results. The gateway will determine which executor is currently running that
build and query that executor for the log stream.

This is intended to be used with the standard finger command line client.
For example:

finger UUID@zuul.example.com

The above would stream the logs for the build identified by UUID.

Finger gateway servers need to be able to connect to the Gearman
server (usually the scheduler host), as well as the console streaming
port on the executors (usually 7900).
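The gateway is started with zuul-fingergw. A sketch of its configuration,
assuming a [fingergw] section in zuul.conf; since port 79 is privileged, the
service is typically started as root and drops privileges to the configured
user:

  [fingergw]
  port=79
  user=zuul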