Akonadi Testrunner

Igor's GSoC project, found in kdepimlibs/akonadi/tests/testrunner. The Akonadi Testrunner sets up an isolated Akonadi server based on an environment configuration file.

Until version 4.11 this meant starting a separate D-Bus daemon (and KDE infrastructure as needed, such as kdeinit). Since version 4.12 this uses the Akonadi Multi-Instance feature. The latter provides less strict isolation, but considerably improved performance and convenience when interacting with the test.

Creating Testrunner Environments

A testrunner environment consists of two components: a set of configuration and data files, and an XML description file of the environment.

Here is an example listing based on the environment used for the libakonadi unittests:

<config>
  <!-- path to KDE configuration ($KDEHOME) -->
  <kdehome>kdehome</kdehome>
  <!-- path to Akonadi configuration, ie. the stuff that usually goes into ~/.config/akonadi/ -->
  <confighome>xdgconfig</confighome>
  <!-- path to Akonadi data, ie. the stuff that usually goes into ~/.local/share/akonadi/ -->
  <datahome>xdgdata</datahome>
  <!-- load resources of the specified types -->
  <agent synchronize="true">akonadi_knut_resource</agent>
  <!-- set environment variables -->
  <envvar name="AKONADI_DISABLE_AGENT_AUTOSTART">true</envvar>
</config>

The first three elements define the relevant paths inside the environment data, relative to the config.xml file.
The <agent> element can be used to create instances of the specified agent (multiple such elements are allowed). If the agent is a resource, it can also be synchronized initially by adding the synchronize="true" attribute; in this case, tests will not be launched until the initial synchronization has completed.

Agents set up in this way can be configured by simply providing the corresponding configuration file in $KDEHOME, such as akonadi_knut_resource_0rc in our example.
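For example, the knut resource instance created above could be pre-configured by placing a file named akonadi_knut_resource_0rc in the environment's kdehome. The group and key names below are illustrative assumptions; check the resource's actual configuration to see the exact keys it expects:

```
[General]
FileName=testdata.xml
```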

Global configuration files can be provided in the same way. akonadi-firstrunrc, as shown below, is particularly useful for preventing the Akonadi default setup mechanism from interfering with the test:

[ProcessedDefaults]
defaultaddressbook=done
defaultcalendar=done

The same applies to kdedrc, which allows disabling the kbuildsycoca4 checks; this can greatly speed up tests:

[General]
CheckSycoca=false
CheckFileStamps=false

The <envvar> element allows you to set arbitrary environment variables inside the test environment. One useful example is AKONADI_DISABLE_AGENT_AUTOSTART which will prevent the Akonadi server from starting autostart agents, which can further speed up the setup process.

Using the Testrunner

Interactive Use

For manual usage, the testrunner provides an interactive mode in which it sets up the environment and provides a way to "switch" into it.

First, start the testrunner:

$ akonaditest -c config.xml --testenv /path/to/testenvironment.sh &

Note: Although the testenv parameter is not required, it makes life a bit easier when testing manually. If you don't pass it, the script will be generated in a temporary directory and is therefore a bit harder to find.

Once the setup is complete, it creates a shell script containing the necessary environment variable changes to switch into the test environment:

$ source /path/to/testenvironment.sh

The environment variables of the current shell are then changed to point to the test environment (eg. KDEHOME, DBUS_*, etc.). Every Akonadi application run in that shell operates on the Akonadi server of the test environment.
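The generated script is just a series of exports; a sketch of what it might contain is shown below. The paths and the instance name are purely illustrative assumptions, and the exact set of variables depends on your testrunner version (e.g. pre-4.12 environments also redirect DBUS_SESSION_BUS_ADDRESS, while 4.12+ sets an Akonadi instance identifier instead):

```shell
# Illustrative sketch of a generated testenvironment.sh; actual values
# depend on config.xml and the testrunner version.
export KDEHOME=/tmp/akonadi-testenv/kdehome
export XDG_CONFIG_HOME=/tmp/akonadi-testenv/xdgconfig
export XDG_DATA_HOME=/tmp/akonadi-testenv/xdgdata
# multi-instance mode (4.12+): point clients at the test server instance
export AKONADI_INSTANCE=testrunner-12345
```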

To terminate and cleanup the test environment, run:

$ shutdown-testenvironment

Note that your shell afterwards still points to the (now no longer existing) test environment and might not work as expected anymore.

Non-Interactive Use

kdepimlibs/akonadi/tests uses the Akonadi Testrunner to run unittests in an isolated environment. For automated usage, the testrunner can be used in a non-interactive way:

$ akonaditest -c config.xml <command> <params>

The testrunner will run <command> <params> within the isolated environment and terminate afterwards.

This can be used from within CMake (example based on kdepimlibs/akonadi/tests):
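A sketch of how this might look in a CMakeLists.txt. The macro name add_akonadi_isolated_test and the exact layout are assumptions modeled on the kdepimlibs test setup; check the actual kdepimlibs/akonadi/tests/CMakeLists.txt for the authoritative version:

```cmake
# Hypothetical helper macro: builds a test executable and registers it
# with CTest so that it runs inside the isolated testrunner environment.
macro(add_akonadi_isolated_test _source)
  get_filename_component(_targetName ${_source} NAME_WE)
  kde4_add_executable(${_targetName} TEST ${_source})
  target_link_libraries(${_targetName}
    akonadi-kde ${QT_QTTEST_LIBRARY} ${QT_QTGUI_LIBRARY})
  # wrap the test in akonaditest so it runs against the isolated server
  add_test(akonadi-${_targetName}
    akonaditest -c ${CMAKE_CURRENT_SOURCE_DIR}/unittestenv/config.xml
    ${EXECUTABLE_OUTPUT_PATH}/${_targetName})
endmacro()

add_akonadi_isolated_test(itemfetchtest.cpp)
```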

Using QtTest unittests with KDE extensions (QTEST_KDEMAIN) together with the testrunner is problematic as they modify some of the environment variables
set by the testrunner. Instead, use the following:

#include <qtest_akonadi.h>
QTEST_AKONADIMAIN( MyTest, NoGUI )
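A minimal sketch of a complete test built this way; the class name, slot name, and .moc file name are illustrative, and the trailing include assumes the usual Qt moc workflow for tests defined in a single .cpp file:

```cpp
#include <qtest_akonadi.h>

class MyTest : public QObject
{
  Q_OBJECT
  private Q_SLOTS:
    void testFetch();
};

void MyTest::testFetch()
{
  // exercise the Akonadi client API against the isolated test server here
  QVERIFY( true );
}

QTEST_AKONADIMAIN( MyTest, NoGUI )

#include "mytest.moc"
```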

KNUT Test Data Resource

In kdepim/akonadi/resources: a fully featured resource that operates on a single XML file. The file format is described in knut.xsd and closely follows the internal structure of Akonadi. New files can be created, e.g. in Akonadiconsole, by creating a knut resource and specifying a non-existing file.

Akonadi Benchmarker

In kdepimlibs/akonadi/tests, part of Robert's thesis.
It is a set of tests that measure the time needed to process many item/collection operations.


Unittests

Akonadi Server

Usable without installation, run with ctest/make test as usual.

kdepimlibs/akonadi

These tests use the Akonadi Testrunner, the test environment is found in kdepimlibs/akonadi/tests/unittestenv.

Setup

The tests do not yet run completely without certain components being installed, namely:

Akonadi Server

KNUT resource

Running the tests

The tests can be run automatically using ctest/make test as usual. To run a single test manually, it needs to be executed using the Akonadi testrunner:
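For example, assuming the environment description from unittestenv and a hypothetical test binary named itemfetchtest:

```
$ akonaditest -c unittestenv/config.xml ./itemfetchtest
```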

kdepim/akonadi

Are there any?


Resource Testing

Tools to automatically test Akonadi resources are currently in development in playground/pim/akonaditest/resourcetester. There are two basic modes of operation: read tests and write tests. The resourcetester tool provides convenience methods for common operations needed to perform those tests.

Read Tests

To verify the read code in a resource works correctly we need to read pre-defined test data from the resource and compare that with independently provided reference data.

Write Tests

Once the reading code is verified we can use that to verify the writing code. This is done by writing a change to the resource, re-creating it to ensure the change was persistent and finally comparing the re-read change with the expected result.