Contents

About the testimage class

The build system has the ability to run a series of automated tests for qemu images.

All the tests are actually commands run on the target system over ssh.

The tests themselves are written in Python, making use of the unittest module.

The class that enables this is testimage.bbclass (which handles loading the tests and starting the qemu image).

Enabling and running the tests

Requirements

You should be aware of the following:

The runqemu script needs sudo access to set up the tap interface, so you need to make sure it can do that non-interactively. That means you need to do one of the following:

add NOPASSWD for your user in /etc/sudoers, either for ALL commands or just for runqemu-ifup (but you need to provide the full path, and that can change if you have multiple poky clones)

on some distributions you also need to comment out "Defaults requiretty" in /etc/sudoers

manually configure a tap interface for your system

run the scripts/runqemu-gen-tapdev script as root, which generates a list of tap devices (that's usually done in AutoBuilder-like setups)
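For the sudoers option, a one-line sketch (the username and poky path are examples; adjust them to your setup and edit with visudo):

```
# /etc/sudoers (or a file under /etc/sudoers.d/), edited with visudo
yourusername ALL = NOPASSWD: /home/yourusername/poky/scripts/runqemu-ifup
```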

the DISPLAY variable needs to be set, which means you need an X server available (e.g. start vncserver for a headless machine)

some of the tests (in particular the smart tests) start an HTTP server on a random high port, used to serve files to the target. The smart module serves ${DEPLOY_DIR}/rpm so it can run smart channel commands. That means your host's firewall must accept incoming connections from 192.168.7.0/24 (the default subnet used for tap devices by runqemu)

Usage

To use it, add "testimage" to the global inherit and call your target image with -c testimage, like this:

add INHERIT += "testimage" in local.conf

build a qemu image, for example core-image-sato: bitbake core-image-sato

then call "bitbake core-image-sato -c testimage". That will run a standard suite of tests.

All test files are currently in meta/lib/oeqa/runtime. The file names themselves are the actual test names we use, also called test modules. A module can have multiple classes and test methods, usually grouped together by the area tested (e.g. tests for systemd go in meta/lib/oeqa/runtime/systemd.py).

A layer can add its own tests in <meta-layer>/lib/oeqa/runtime, provided it extends BBPATH as normal in its layer.conf (test module names shouldn't collide with those in core, though).

You can change the tests run by appending to or overriding the TEST_SUITES variable in local.conf. Each name in TEST_SUITES represents a required test for the image; no module skipping is allowed, even if the test isn't suitable for the image (e.g. running the rpm tests on an image without rpm). Appending "auto" to TEST_SUITES means it will try to run all tests that are suitable for the image (each test decides that on its own).

Note that the order in TEST_SUITES is important (it's the order modules run in) and it influences test dependencies. Tests that depend on other tests (e.g. ssh depends on the ping test) should be added last; there is no re-ordering or dependency handling by the test class, it just respects the order. Each module can have multiple classes with multiple test methods (and Python unittest rules apply here).
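For example, a local.conf fragment (ping is listed before ssh because ssh depends on it, and "auto" adds anything else applicable for the image):

```
INHERIT += "testimage"
TEST_SUITES = "ping ssh auto"
```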

In short:

to run the default tests for core-image-sato you don't need to change TEST_SUITES (just call bitbake core-image-sato -c testimage as above)

As you can see, some tests passed and some were skipped (because they weren't applicable for this image). And while I hadn't added systemd tests to TEST_SUITES, they were still run (because of "auto").

Let's see what happens if I use TEST_SUITES = "ping ssh gcc" for a core-image-sato image (which doesn't have the tools-sdk feature):

Q: Why is there a ". /etc/profile" before each command? A: Because of the default PATH (/bin:/usr/bin) when running commands over ssh (the full answer is more complex; suffice to say we need to source /etc/profile to extend PATH)

while it might look like the commands aren't properly escaped, those ssh commands are actually run through Python's subprocess module with shell=False (so copy-pasting the commands into your shell won't work unless you escape them properly)

there is a default timeout of 300 seconds for each command (though a test can override that or run a command with no timeout). There is no timeout for scp commands, though.

the tests can use the return code and/or the output to decide whether they pass or fail.

Writing new tests

All new test files should go in meta/lib/oeqa/runtime (or, for layer-specific tests, in <meta-layer>/lib/oeqa/runtime, as described above). The file names themselves are the actual test names, also called test modules.

Test modules are found in meta/lib/oeqa/runtime and can use code from meta/lib/oeqa/utils, which contains helper classes for extra functionality (like starting an HTTP server).

You should start by copying an existing module (e.g. syslog.py or gcc.py are good examples) and go from there.

You'll see that all test classes inherit oeRuntimeTest (found in meta/lib/oeqa/oetest.py). This base class offers some helper attributes. Here's a short list:

Class methods:

hasPackage(pkg): returns True if pkg is in the image's installed package list (based on WORKDIR/installed_pkgs.txt, which is generated at do_rootfs)

hasFeature(feature): returns True if feature is in IMAGE_FEATURES or DISTRO_FEATURES

Class attributes:

d: the bitbake data store (so you can do things like oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager"))

testslist and testsrequired: used internally, tests shouldn't need them

filesdir: absolute path to meta/lib/oeqa/runtime/files (which contains helper files for tests, meant to be copied to the target, like small .c files to be compiled)

qemu: access to the QemuRunner object, the class that boots the image. Useful attributes:

ip: the machine's IP

host_ip: host IP, only used by smart tests

the other attributes aren't relevant for tests

target: SSHControl object, used for running commands on the image

host: same as qemu.ip, used internally, not really used in tests

timeout: global timeout for commands run on the target for this instance (default: 300).

run(cmd, timeout=None): the single most used method, basically a wrapper for 'ssh root@host "cmd"'. It returns a tuple (status, output): the return code of cmd and whatever output it produces. The optional timeout argument is the number of seconds to wait for cmd to return (if None, the instance's default timeout is used, currently 300; if 0, it runs forever or until the command returns).
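To illustrate the (status, output) contract, here's a simplified local stand-in for run() (SSHControl itself needs a booted image; the real method runs the command over ssh via subprocess with shell=False, so this stub only shows how a test consumes the tuple):

```python
import subprocess

def run(cmd, timeout=300):
    # Stand-in for SSHControl.run(): execute cmd locally and return
    # (status, output), mirroring the shape the real method returns.
    # timeout=0 means "no timeout", matching the documented semantics.
    proc = subprocess.run(["sh", "-c", cmd], capture_output=True,
                          text=True, timeout=timeout or None)
    return (proc.returncode, proc.stdout.strip())

status, output = run("echo hello")
# A test would then assert on the tuple, e.g.:
#   self.assertEqual(status, 0, msg="Command failed: %s" % output)
```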

Here's a breakdown of what happens when a test module is loaded by the Python unittest loader:

setUpModule: although optional, this is found in almost all modules and allows checking for certain features/packages in an image (it's also how TEST_SUITES = "auto" works: it loads all tests but skips them based on these checks)

The actual test class has two class methods, setUpClass and tearDownClass, which run before and after all the test methods, respectively. These are called test fixtures and are used for setting up tests (like copying files to the target in this case). Exceptions thrown in setUpModule/setUpClass and setUp methods mark the test as an ERROR, not a FAIL.

the test methods themselves just run some commands on the target and assert on their return codes. Assertion failures lead to FAILs.

There are two test classes here, each with their methods making use of more of the attributes from oeRuntimeTest.

This also makes use of unittest's skip decorators and our own decorator skipUnlessPassed, which uses test method names for skipping: basically a kind of dependency between tests.
skipUnlessPassed is misleading and there is a gotcha here: it only works for ordered tests (that's why the order in TEST_SUITES is important, as are the order and names of the test methods).
Why? Because of the way unittest counts passed tests. A passed test is one which isn't skipped, failed or errored, and this becomes a problem when the respective test method hasn't run yet (so trying to depend on a test that runs after your module won't work as expected). That is, there is almost no distinction between a test which has passed and one which hasn't run yet (see Python's unittest sources in result.py).
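Putting the pieces above together, a module has roughly this shape (a plain-unittest sketch: oeRuntimeTest, skipUnlessPassed and the target attribute come from the oeqa framework and only exist inside a build tree, so they appear here only in comments):

```python
import unittest

def setUpModule():
    # A real module would skip itself unless the image qualifies, e.g.:
    #   if not oeRuntimeTest.hasPackage("syslog"):
    #       raise unittest.SkipTest("No syslog package in image")
    pass

class ExampleTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Fixture: a real test would copy helper files from filesdir to
        # the target here. Exceptions raised here mark tests as ERROR.
        cls.greeting = "hello"

    # A real method would be decorated with something like
    # @skipUnlessPassed("test_ssh") and call self.target.run(cmd);
    # here we fake the (status, output) tuple it would return.
    def test_greeting(self):
        status, output = 0, self.greeting
        self.assertEqual(status, 0, msg="command failed: %s" % output)
```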

One more thing: be inventive with the shell commands you run, and construct them so that a single return code reliably indicates success. Sometimes you do need to parse output; see df.py and date.py for examples.
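As an illustration of output parsing, here's how a df.py-style check might pull the Use% column out of df output (the sample line is hardcoded here; on a real target it would come from target.run("df /")):

```python
# Sample line as produced by "df / | tail -n 1" on a target image;
# hardcoded because there is no live target in this sketch.
sample = "/dev/root  1031992  455116  524460  47% /"

# The fifth whitespace-separated field is the Use% column.
usage = int(sample.split()[4].rstrip("%"))
print(usage)  # 47
```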