action level

The pipeline is organised into sections and levels. The first
section of the pipeline is given level 1. Sub-tasks of that section start
with level 1.1 and so on. Log files and job definitions refer to
actions using the level. Details of the action can then be accessed using
the level as the location: job/8360/definition#2.4.5

alias

A string which relates the descriptive device-type name to a
particular list of aliases which can be used to look up the matching
device type. This can be useful to list the device tree blobs
which can be used with this device-type. (Aliases can be used directly
in job submissions.)

BMC

A Baseboard Management Controller (BMC) is an embedded controller
on a computer mainboard which allows external monitoring and
management of the computer system.

CI loop

Continuous Integration (CI) typically involves repeated automated
submissions using automated builds of the artifacts, prompted by
modifications made by developers. Providing feedback to the developers on
whether the automated build passed or failed creates a loop. LAVA is
designed as one component of a CI loop.

device dictionary

The device dictionary holds data which is specific to one device within a
group of devices of the same device type. For example, the power control
commands which reference a single port number. The dictionary itself is a
key:value store within the LAVA server database which admins can modify to
set configuration values according to the pipeline design.
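
As an illustration, a device dictionary is usually written as a small Jinja2
file extending the device-type template. The template name, commands and port
number below are entirely hypothetical:

```jinja
{# Hypothetical device dictionary: extends the admin's device-type
   template and sets per-device power commands for one PDU port. #}
{% extends 'beaglebone-black.jinja2' %}
{% set power_on_command = '/usr/bin/pduclient --hostname pdu01 --port 04 --command on' %}
{% set power_off_command = '/usr/bin/pduclient --hostname pdu01 --port 04 --command off' %}
```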

device status transition

A record of when a device changed state, who caused the transition and
when the transition took place, as well as any message assigned to the
transition. Individual transitions can be viewed in LAVA at
<server>/scheduler/transition/<ID>, where the ID is a sequential integer.
If the transition was caused by a job, this view links to that job.

device tag

A tag is a device-specific label which describes a particular hardware
capability of a particular device. Test jobs using tags will fail if no
suitable devices exist matching the requested device tag or tags. Tags are
typically used when only a proportion of the devices of the specified type
have hardware support for a particular feature, for example because those
devices have peripheral hardware connected or enabled. A device tag can
only be created or assigned to a particular device by a lab admin. When
requesting tags, remember to include a description of what the tagged
device can provide to a test job.
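
For example, a job submission might request tags like this; the tag names
are hypothetical and must already have been created by a lab admin:

```yaml
# Hypothetical tags: only devices carrying both tags can run this job.
device_type: beaglebone-black
tags:
  - usb-stick
  - hdmi
```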

device type

The common type of a number of devices in LAVA. The device type may have a
health check defined. Devices with the same device type will run
the same health check at regular intervals. See Device types.

developer image

A build of Android which, when deployed to a device, makes the device
visible to adb. Devices configured this way can have the image replaced
from any machine, just by connecting a suitable cable, so such images are
typically replaced with a production image before the hardware is sold to
the customer.

dispatcher

A machine to which multiple devices are connected. The dispatcher has
lava-dispatcher installed and passes the commands to the device and
other processes involved in running the LAVA test. A dispatcher does not
need to be at the same location as the server which runs the scheduler. The
term dispatcher relates to how the machine operates the
lava-dispatch process using lava-slave. The related term
worker relates to how the machine appears from the master.

distributed deployment

A method of installing LAVA involving a single master and one or
more remote workers which communicate with the
master using ZMQ. This method spreads the load of running tests on
devices across multiple dispatchers.

frontend

lava-server provides a generic frontend consisting of the Results,
Queries, Job tables, Device tables and Charts. Many projects will need to
customise this data to make it directly relevant to the developers. This is
supported using the XML-RPC and REST API support.

group

LAVA uses the Django local group configuration (synchronising
Django groups with external groups like LDAP is not supported).
Users can be added to groups after the specified group has been
created by admins, using the Django administration interface or the
lava-server manage groups and lava-server manage users
command line support.

hacking session

A test job which uses a particular type of test definition to allow users
to connect to a test device and interact with the test environment
directly. Normally implemented by installing and enabling an SSH daemon
inside the test image. Not all devices can support hacking sessions.

health check

A test job for one specific device type which is automatically run
at regular intervals to ensure that the physical device is capable of
performing the minimum range of tasks. If the health check fails on a
particular device, LAVA will automatically put that device Offline.
Health checks have higher priority than any other jobs.

hidden device type

A device type can be hidden by the LAVA administrators. Devices of a
hidden device type will only be visible to owners of at least
one device of that type. Other users will not be able to access the job
output, device status transition pages or bundle streams of devices of a
hidden type. Devices of a hidden type will be shown as Unavailable in
tables of test jobs, and omitted from tables of devices and device types if
the user viewing the table does not own any devices of the hidden type.

hostname

The unique name of this device in this LAVA instance, used to link all
jobs, results and device information to a specific device configuration.

inline

A type of test definition which is contained within the job submission
instead of being fetched from a URL. These are useful for debugging tests
and are recommended for the synchronisation support within
multinode test jobs.
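
A minimal inline test definition might look like this sketch (the names
and steps are illustrative):

```yaml
- test:
    definitions:
      - from: inline
        name: smoke-tests
        path: inline/smoke-tests.yaml
        repository:
          metadata:
            format: Lava-Test Test Definition 1.0
            name: smoke-tests
            description: "Illustrative inline smoke tests"
          run:
            steps:
              - lava-test-case uname --shell uname -a
```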

interface tag

An interface tag is similar to a device tag but operates solely
within the VLANd support. An interface tag may be related to the
link speed which is achievable on a particular switch and port; it may
also embed information about that link.

Jinja2

Jinja2 is a templating language for Python, modelled after Django’s
templates. It is used in LAVA for device-type configuration, as it allows
conditional logic and variable substitution when generating device
configuration for the dispatcher.
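
As a sketch of the kind of logic involved (the variable names here are
illustrative, not taken from a real template):

```jinja
{# Illustrative fragment: substitute a job-supplied console device,
   falling back to a default when none is set. #}
{% if console_device %}
console: {{ console_device }},{{ baud_rate|default(115200) }}
{% else %}
console: ttyS0,115200
{% endif %}
```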

job context

Test job definitions can include the context: dictionary at the top
level. This is used to set values for selected variables in the device
configuration, subject to the administrator settings for the device
templates and device dictionary. A common example is to instruct the
template to use the qemu-system-x86_64 executable when starting a QEMU
test job, using the value arch: amd64. All device types support variables
in the job context.
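
For example, the QEMU case described above can be written as:

```yaml
# Ask the device-type template to launch qemu-system-x86_64.
context:
  arch: amd64
```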

job definition

The original YAML submitted to create a job in LAVA is retained in the
database and can be viewed directly from the job log. Although the YAML
itself is unchanged, the support on the instance may well have changed
since the job was submitted, so some care is required when modifying job
definitions from old jobs to make a new submission. If the job was a
MultiNode job, the MultiNode definition will be the unchanged YAML from
the original submission; the job definition will be the parsed YAML for
this particular device within the MultiNode job.

LAVA_LXC_HOME

The path within the LXC, set to /lava-lxc by default. From the host
machine this path would be something like
/var/lib/lxc/{container-name}/rootfs/lava-lxc. Any files downloaded by a
to: download deploy action will be copied to this location, which can
then be accessed from within the container.

LXC

Linux containers are used in LAVA to
allow custom configurations on the dispatcher for each use. The extra
utilities or services are transparently available to the pipeline code and
selected device nodes can also be made available, depending on admin
configuration of the devices.

This is a URL scheme specific to LAVA which points to files available in
LAVA_LXC_HOME. A URL like lxc:///boot.img refers to
/var/lib/lxc/{container-name}/rootfs/lava-lxc/boot.img on the host or
/lava-lxc/boot.img within the LXC. This URL scheme is valid
only when the LXC protocol is defined in the test job. It also
only makes sense for the deploy and boot actions.

Note

Pay attention to the three forward slashes in the URL when referring to a
file.
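
As a sketch, a deploy action might reference a file previously placed in
the container (the deploy method and filename are illustrative):

```yaml
- deploy:
    to: fastboot
    images:
      boot:
        url: lxc:///boot.img   # /lava-lxc/boot.img inside the container
```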

master

The master is a server machine with lava-server installed; it
optionally supports one or more remote workers.

messageID

Each message sent using the MultiNode API uses a messageID, which
is a string, unique within the group. It is recommended to make these
strings descriptive, using underscores instead of spaces. The messageID
will be included in the log files of the test.
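
For example, test shell steps in a MultiNode definition might use
descriptive messageIDs with the lava-send and lava-wait helpers (the
payload key and values are illustrative):

```yaml
run:
  steps:
    # messageID "server_ready" broadcast with a key:value payload
    - lava-send server_ready port=8080
    # block until another device in the group sends "client_done"
    - lava-wait client_done
```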

metadata

Test jobs should include metadata relating to the files used within the
job. Metadata consists of a key and a value; there is no limit to the
number of key:value pairs as long as each key is unique within the metadata
for that test job.
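
For example (the keys and values here are invented for illustration):

```yaml
metadata:
  build-url: https://example.com/builds/1234   # hypothetical
  build-type: nightly
  kernel-branch: master
```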

namespace

A simple text label which is used to tie related actions together within a
test job submission where multiple deploy, boot or test actions are
defined. A common use case for namespaces is the use of lxc in a
test job where some actions are to be executed inside the LXC and some on
the DUT. The namespace is used to store the temporary locations of
files and other dynamic data during the running of the test job so that,
for example, the test runner is able to execute the correct test definition
YAML. Namespaces are set in the test job submission.
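
As a sketch, two deploy actions split between a container and the DUT
(the namespace names, deploy methods and URL are illustrative):

```yaml
actions:
  - deploy:
      namespace: probe       # actions executed inside the LXC
      to: lxc
      os: debian
  - deploy:
      namespace: device      # actions executed for the DUT
      to: tftp
      kernel:
        url: http://example.com/zImage   # hypothetical URL
```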

offline

A status of a device which allows jobs to be submitted and reserved for the
device but where the jobs will not start to run until the device is online.
Devices enter the offline state when a health check fails on that device or
the administrator puts the device offline.

PDU

PDU is an abbreviation for Power Distribution Unit - a network-controlled
set of relays which allow the power to the devices to be turned off and on
remotely. Many PDUs are supported by pdudaemon to be able to
hard reset devices in LAVA.

physical access

The user or group with physical access to the device, for example to fix a
broken SD card or check for possible problems with physical connections.
The user or group with physical access is recommended to be one of the
superusers.

pipeline

Within LAVA, the pipeline is the V2 model for the dispatcher code where
submitted jobs are converted to a pipeline of discrete actions - each
pipeline is specific to the structure of that submission and the entire
pipeline is validated before the job starts. The model integrates concepts
like fail-early, error identification, avoid defaults, fail and diagnose
later, as well as giving test writers more rope to make LAVA more
transparent. See Lava Dispatcher Design and Advanced Use Cases.

priority

A job has a default priority of Medium. This means that the job will be
scheduled according to the submit time of the job, in a list of jobs of the
same priority. Every health check has a higher priority than any
submitted job and if a health check is required, it will always run
before any other jobs. Priority only has any effect while the job is queued
as Submitted.

production image

A build of Android which, when deployed to a device, means that the device is
not visible to adb. This is typically how a device is configured when
first sold to the consumer.

prompts

A list of prompt strings which the test writer needs to specify in advance
and which LAVA will use to determine whether the boot was successful. One of
the specified prompts must match before the test can be started.
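
For example, a boot action listing the prompts LAVA should match (the
prompt strings and boot method are illustrative):

```yaml
- boot:
    method: u-boot
    prompts:
      - 'root@debian:'
      - 'login:'
```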

protocol

A protocol in LAVA is a method of interacting with external services using
an API instead of with direct
shell commands or via a test shell. Examples of services in LAVA which use
protocols include LXC, MultiNode and VLANd. The
protocol defines which API calls are available through the LAVA interface
and the Pipeline determines when the API call is made.

restricted device

A restricted device can only accept job submissions from the device owner.
If the device owner is a group, all users in that group can submit jobs to
the device.

results

LAVA results provide a generic view of how the tests performed within a
test job. Results from test jobs support queries, charts and downloads
for later analysis and for frontends. Results can be viewed whilst the
test job is running. Results are also generated during the operation of
the test job, outside the test action itself. All results are referenced
solely using the test job ID.

retired

A device is retired when it can no longer be used by LAVA. A retired device
allows historical data to be retained in the database, including log files,
result bundles and state transitions. Devices can also be retired when the
device is moved from one instance to another.

role

An arbitrary label used in MultiNode tests to determine which tests are run
on the devices and inside the YAML to determine how the devices
communicate.

rootfs

A tarball for the root file system.

rootfstype

Filesystem type for the root filesystem, e.g. ext2, ext3, ext4.

scheduler

There is a single scheduler in LAVA, running on the master. The
scheduler is responsible for assigning devices to submitted test jobs.

target_group

In MultiNode, the single submission is split into multiple test
jobs which all share a single target_group, a string used as a
unique ID. The target_group is usually transparent to test writers but
underpins how the rest of the MultiNode API operates.

test case

An individual test case records a single test event as a pass or fail
along with measurements, units or a reference.

test run

The result from a single test definition execution. The individual ID and
result of a single test within a test run is called a test case.

test shell

Most test jobs will boot into a POSIX type shell, much like if the user had
used ssh. LAVA uses the test shell to execute the tests defined in the
Lava Test Shell Definition(s) specified in the job definition.

test set

Test writers can choose to subdivide a single test suite into
multiple sets, for example to handle repetition or changes to the
parameters used to run the tests.

test suite

Individual test cases are aggregated into a test suite and given the name
specified in the test job definition. The Test Suite is created when
results are generated in the running test job. LAVA uses a reserved test
suite called lava for results generated by the actions running the test
job itself. Results in the lava suite contain details like the commit
hash of the test definitions, messages from exceptions raised if the job
ends Incomplete and other data about how the test behaved.

TFTP

Trivial File Transfer Protocol (TFTP) is a file transfer protocol, mainly
used to serve boot images over the network to other machines (e.g. for PXE
booting). The service is provided by the tftpd-hpa package and is not
managed by LAVA directly.

visibility

Supports values of public, personal and group and
controls who is allowed to view the job and the results generated
by the job. This includes whether the results are available to
queries and to charts:

visibility: personal

or:

visibility: public

The group visibility setting should list the groups which users must be
in to be allowed to see the job. If more than one group is
listed, users must be in all the listed groups to be able to view
the job or the results:

visibility:
  group:
    - developers
    - project

In this example, users must be members of both the developers group
and the project group. Groups must already exist in the Django
configuration for the instance.

VLANd

VLANd is a daemon to support virtual local area networks in LAVA. This
support is specialised and requires careful configuration of the entire
LAVA instance, including the physical layout of the switches and the
devices of that instance.

worker

The worker is responsible for running the lava-slave daemon to start
and monitor test jobs running on the dispatcher. Each master has a
worker installed by default. When a dispatcher is added to the master as a
separate machine, this worker is a remote worker. The admin decides
how many devices to assign to which worker. In large instances, it is
common for all devices to be assigned to remote workers to manage the load
on the master.

ZMQ

Zero MQ (or 0MQ) is the basis of
the refactoring to solve many of the problems inherent in the
distributed_instance. The detail of this change is only relevant to
developers, but it allows LAVA to remove the need for postgresql and
sshfs connections between the master and remote workers, and means that
remote workers no longer need lava-server to be installed on the
worker. Developers can find more information in the
Lava Dispatcher Design documentation.