Thursday, November 29, 2012

Back in August, I wrote about the X.Org Integration Test suite, in short XIT. Since then I've been quite busy adding tests to it, expanding it and fixing the odd bug. Jasper St. Pierre has been writing tests against it for the new pointer barrier features, making that feature probably the first to get full, repeatable testing before merging.

Aside from general cleanup so that tests are even easier to write, one of the features I've pushed today is a bug registry. One of the issues with the test suite (especially one that is not integrated with the repository directly) is that tests don't just fail or succeed, they can also be known to fail, or known to succeed.

For example, a test may succeed on git master, but fail on the 1.13 branch - either because the fix has not yet been backported, or because the fix will not be backported anyway.

Keeping track of those failures is quite a task, especially when you have multiple server versions to worry about.

This is what the bug registry is supposed to address. It's in its early stages, but it has already been quite helpful. At its very base is the command-line client xit-bug-registry and the matching XML data file that keeps track of the various test cases.

This is a simple introduction on how to use it.

To get started, run the tests, then create a new registry from the test output:
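A rough sketch of the two steps (the test-runner path, output flag, and the exact xit-bug-registry subcommand are illustrative and may differ in your checkout):

```shell
# Run a test suite and capture the results as XML
# (XIT tests are googletest-based, so --gtest_output works here).
sudo ./tests/server/server --gtest_output="xml:results.xml"

# Seed a new registry from those results.
./registries/xit-bug-registry create results.xml > registry.xml
```

The registry file now records, for each test case, whether it is expected to succeed or fail.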

There are other bits one can add, but bugs and commits are likely the
interesting bits. Right now, that's all it does, but in the future I hope to
expand the script to query the bug database for the bug status, and query
repositories to check if the fix is on the branch yet.
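Attaching a bug and a commit to a known failure might look like this - the suite name, test name, subcommands, and placeholders are all illustrative:

```shell
# Mark a known-failing test case with the bug report it corresponds to
# and the commit that fixes it. <bug-url> and <commit-sha> are
# placeholders for your actual values.
./registries/xit-bug-registry -f registry.xml \
    edit SomeSuite SomeTestCase add-bug <bug-url>
./registries/xit-bug-registry -f registry.xml \
    edit SomeSuite SomeTestCase add-commit <commit-sha>
```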

That's all nice, but the really important bit is verifying that, after fixing a
bug, one hasn't broken anything else.

This simply shows us which tests had which result. Cases with code ++ succeeded
when they were expected to, and those with -- failed when they were expected to. XX is what
you need to look for: it indicates that the test outcome differs from the status recorded in the registry.
In this case, two tests now succeed where before they didn't - usually a good outcome.

The information is all in the registry file, and though I admit the interface
is still clumsy, it's quite simple to keep a set of registries (e.g. upstream,
Fedora, RHEL, etc.) and thus a known set of test outcomes for each.

Finally, as test cases are being added, it's important to update the registry.
This is as simple as:
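Something along these lines - the merge subcommand name is my assumption, so check the client's help output:

```shell
# Fold the results of a fresh run into the existing registry,
# picking up any newly added test cases along the way.
./registries/xit-bug-registry -f registry.xml merge new-results.xml
```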