Acceptance testing a pyglet application

I’ve been trying to create a simple acceptance test for a pyglet application. A thorough suite of acceptance tests, verifying the correctness of all the shapes drawn to the screen by OpenGL, sounds like far more work than I want to do. But a couple of simple acceptance tests would be valuable, to check out basic things: that the application runs; opens a fullscreen window; responds correctly to some basic inputs and command-line options; and has an acceptable framerate. Especially if I could quickly run this basic smoke test on multiple operating systems.

I tried writing an acceptance test which ran the application-under-test on a new thread. This didn’t work for me *at all* (perhaps because pyglet is not intended to be used with multiple threads), so for the time being I’d given up, and was proceeding without acceptance-level tests.

A couple of days ago I had the idea of a test that didn’t involve threading. Instead, it takes a list of test conditions (as lambdas), and uses pyglet’s own clock and scheduler to request a callback to a test function – try_condition() – on every frame:
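Something along these lines – a sketch reconstructed from the description that follows, in which TIMEOUT, self.elapsed and next_condition() are assumed names rather than the original code:

```python
class AcceptanceTest(object):

    TIMEOUT = 5.0  # seconds to keep retrying one condition before failing

    def try_condition(self, dt):
        # Called by pyglet's clock on every frame; dt is the elapsed
        # time since the previous call.
        if self.condition():
            # This condition has passed: move on to the next one.
            self.elapsed = 0.0
            self.next_condition()
        else:
            # Not yet: return and let the application keep running,
            # but give up if this condition has been failing too long.
            self.elapsed += dt
            assert self.elapsed < self.TIMEOUT, \
                "condition timed out: %r" % (self.condition,)
```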

When try_condition() gets called by pyglet on the first frame, it evaluates the value of self.condition. If True, then that first part of the test has passed, and it gets the next condition off the list. If False, then this function simply returns, to let the application continue running. When try_condition() is called again on the next frame, it resumes where it left off, testing out the same condition again. After it has been trying the same condition for a long enough time, it deems that condition to have failed, and raises an assertion error.

Here is the rest of the class, which sets up the scheduled calls to try_condition().
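A sketch of how that might look – not the original listing. To let the snippet stand alone I allow the scheduling function to be passed in as a parameter, defaulting to pyglet's own clock as described:

```python
class AcceptanceTest(object):
    # The rest of the class (a sketch, using assumed names).

    def __init__(self, window, conditions, schedule=None):
        self.window = window
        self.conditions = list(conditions)
        self.condition = None
        self.elapsed = 0.0
        self.next_condition()
        if schedule is None:
            # By default, ask pyglet's clock to call try_condition
            # on every frame.
            import pyglet
            schedule = pyglet.clock.schedule
        schedule(self.try_condition)

    def next_condition(self):
        if self.conditions:
            # Pluck the next condition off the front of the list.
            self.condition = self.conditions.pop(0)
            self.elapsed = 0.0
        else:
            # No conditions left: the whole test has passed, so ask
            # the application to terminate.
            self.window.has_exit = True
```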

So self.conditions is a list of lambdas that will be provided by the acceptance test. Calling self.next_condition() merely plucks the next condition off the list, into self.condition. If there are no more conditions left, then the test has entirely passed and it requests the application to terminate, by setting the pyglet member window.has_exit to True.

In future, I should probably allow the acceptance test code to specify a different timeout value for each condition.

Anyhow, we can then write an acceptance test by inheriting from this AcceptanceTest class, providing an appropriate list of conditions, and then just calling the application’s main() function. This function won’t return until the application exits, either when one of the conditions times out and raises an assertion error, or else when all conditions have passed and the test framework sets window.has_exit.
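To illustrate the flow end to end, here is a self-contained simulation: FakeWindow and fake_main stand in for the real pyglet window and the application's main() function, and a compact version of the AcceptanceTest sketch is inlined so the snippet runs on its own.

```python
class AcceptanceTest(object):
    # Compact version of the class sketched above (assumed names).
    TIMEOUT = 5.0

    def __init__(self, window, conditions, schedule):
        self.window = window
        self.conditions = list(conditions)
        self.elapsed = 0.0
        self.next_condition()
        schedule(self.try_condition)  # e.g. pyglet.clock.schedule

    def next_condition(self):
        if self.conditions:
            self.condition = self.conditions.pop(0)
            self.elapsed = 0.0
        else:
            self.window.has_exit = True

    def try_condition(self, dt):
        if self.condition():
            self.next_condition()
        else:
            self.elapsed += dt
            assert self.elapsed < self.TIMEOUT, "condition timed out"


class FakeWindow(object):
    # Stand-in for the real pyglet window.
    has_exit = False
    fullscreen = False


callbacks = []
window = FakeWindow()

def fake_main():
    # Stand-in for the application's main(): "opens" fullscreen, then
    # runs frames until something sets window.has_exit.
    window.fullscreen = True
    frames = 0
    while not window.has_exit and frames < 100:
        for callback in callbacks:
            callback(0.1)  # pretend each frame took 0.1 seconds
        frames += 1

test = AcceptanceTest(
    window,
    conditions=[
        lambda: window.fullscreen,  # did the app open fullscreen?
    ],
    schedule=callbacks.append,
)
fake_main()
```

When fake_main() returns, window.has_exit has been set by the test framework, exactly as main() would return in the real case.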

The conditions don’t simply have to be expressions to verify the application state. They could stimulate the application by raising mouse or keyboard events, etc, and then simply return True so that the test harness would move right on to the next condition.
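For instance – sketched here against a stand-in window object, since a real test would use the actual pyglet window and the symbols from pyglet.window.key:

```python
class FakeWindow(object):
    # Stand-in for a pyglet window: just records dispatched events.
    def __init__(self):
        self.events = []

    def dispatch_event(self, name, *args):
        self.events.append((name, args))


window = FakeWindow()
SPACE = 32  # stand-in for pyglet.window.key.SPACE

conditions = [
    # Stimulate: synthesize a key press, then return True so the
    # harness moves straight on to the next condition...
    lambda: window.dispatch_event('on_key_press', SPACE, 0) or True,
    # ...which then checks the result (here just that the fake window
    # recorded the event; a real test would check application state).
    lambda: window.events[-1][0] == 'on_key_press',
]
```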

Early days with this idea, but it seems to work, and thus far I’m very happy with it.

The basic idea is that:
1) Your test suite schedules “run_next_test” in the main application scheduler at a regular interval.
2) Tests are kept in a circular buffer.
3) Tests can fail, succeed or be postponed (if the environment is not yet ripe to perform the test).
4) Tests have full access to the window object, so they can test any property of the entities or trigger any of their methods.
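In sketch form (illustrative names and status values, not my actual code):

```python
from collections import deque

SUCCESS, FAIL, POSTPONE = range(3)

class TestRunner(object):

    def __init__(self, window, tests):
        self.window = window
        self.tests = deque(tests)  # the circular buffer of tests

    def run_next_test(self, dt):
        # Scheduled at a regular interval, e.g.
        #   pyglet.clock.schedule_interval(runner.run_next_test, 0.5)
        if not self.tests:
            return
        test = self.tests.popleft()
        result = test(self.window)
        if result == POSTPONE:
            # Environment not ripe yet: rotate the test to the back
            # of the buffer to try again later.
            self.tests.append(test)
        elif result == FAIL:
            raise AssertionError("test failed: %r" % (test,))
        # on SUCCESS the test is simply dropped from the buffer
```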

Mine – like yours – is just a stub. It works for the kind of tests I want to perform right now, but it has a few limitations. Most notably:
– I am not using the standard “unittest” module (I use my own test grammar)
– Tests are scheduled at regular intervals rather than being triggered by a specific condition.

Both of them – though – could be circumvented with some extra work.

PS: Your blog is the best learning aid I could find on the intertubes for somebody – like me – new to pyglet. Thanks!

Thanks mac. I haven’t refined the idea since then – in the end, I found that even with this sort of framework, adding extra system-level testing to game-like OpenGL applications is quite difficult (since I was thinking about trying to analyse the appearance of what gets rendered on screen).

So the ideas described here have been invaluable as a smoke test: running one or two simple system-level tests as shown, especially against an install (rather than from source), to check that everything still works after being bundled up and then installed on various operating systems.

If you do have any better ideas about what sort of things it makes sense to system-test in an OpenGL application, I’d be all ears. Perhaps I’m having problems because I’m trying to test it at the wrong level of abstraction.