Unit testing complex objects using OzCode

A good unit test examines a specific scenario using the required minimal input and then verifies that the system has reached a specific state.

This could prove quite a challenge when the unit of work requires complex input, or if the resulting state is difficult to isolate from the rest of the system.

When faced with a complex problem domain, developers tend to oversimplify the scenarios they test. The problem is that oversimplification drains unit tests of meaning while making them increasingly fragile, breaking with every trivial change. From there it’s a slippery slope until eventually the developer stops writing unit tests altogether, frustrated with the state of these simple yet unmaintainable tests.

Another solution is to write acceptance/scenario/integration tests that exercise a big chunk of the system. While such tests are needed in every project, they should be used sparingly and only for the main requirements/execution paths. The reason is that they usually take longer to run and are easily affected by external factors – a scenario test can fail even when the code works perfectly. Relying heavily on these kinds of tests leads to long-running test sessions, and when they fail it’s difficult to quickly find the root cause. Instead of quickly fixing the problem, the developer must fire up the debugger and hunt for the bug.

A developer trying to create an existing domain object may discover that many domain objects are not created by code but constructed from external services, runtime variables and/or data repositories, and re-creating such objects requires real effort. I’ve seen teams build serialization solutions just to save and load the right instance for each test, and in fact unit testing efforts are often postponed (sometimes indefinitely) until such a tool is created.

Handling complex input

Writing unit tests for this application is hard, as many factors affect every single pixel’s color: camera angle, object placement, material types, the direction and color of the lights – you name it.

When facing this amount of data it’s hard not to be intimidated into testing against the whole rendered scene. But running the entire application takes a long time, and verifying correctness would be harder still.

Another problem with testing the whole scene is knowing whether the test has passed. One option is to compare the result with a previously rendered image. The problem with this approach is that the test fails on every single pixel change, and we can’t pinpoint why it failed – “image A is not exactly the same as image B” is simply not good enough, and determining whether a bug actually exists, and what caused it, would require quite a lot of work.

Ideally we’d want to be able to check several key pixels on that image to make sure that they were rendered successfully.

The logic I’m interested in resides in the TraceRay method which takes a Ray object and a Scene and calculates the color of a single pixel:
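Based on that description, the shapes involved look roughly like the sketch below. The type members are assumptions made for illustration – the real application’s Ray, Scene and Color types are certainly richer:

```csharp
// Minimal stand-ins for the types mentioned in the article.
// Member names here are illustrative assumptions, not the real API.
public struct Color
{
    public double R, G, B;
}

public class Ray
{
    // Origin and direction of the ray being traced.
    public double OriginX, OriginY, OriginZ;
    public double DirX, DirY, DirZ;
}

public class Scene
{
    // Objects, materials and lights would live here.
}

public class RayTracer
{
    // Calculates the color of a single pixel from one ray and the scene.
    public Color TraceRay(Ray ray, Scene scene)
    {
        // Real implementation: intersections, reflections, lighting...
        return new Color();
    }
}
```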

In the real world I might not even be able to create the Scene object by hand, since it’s being calculated from external inputs.

Lastly, even if creating the Scene were easy, getting all of the inputs right would require some trial and error, and reaching a specific place in the code would prove challenging.

How export saved my day

First, let’s test that a specific pixel gets a specific color. For this example I want to test the value of pixel (100, 230). It’s interesting because it is affected by the floor reflection, the sphere surface and the various lights.

The first order of business is to decide which scenario to test, and to debug until we reach that specific scenario. Using conditional breakpoints or a simple temporary code change, I get the debugger to the following line for the desired pixel (100, 230).

Note: In this case, using conditional breakpoints to stop at the right location can take a while, since they use interrupts to break on every loop iteration – consider changing the code temporarily with a simple “if” statement.
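The temporary “if” can look like the sketch below. The render-loop structure is an assumption for illustration; the point is that the check runs in compiled code, so it is far cheaper than a conditional breakpoint firing on every iteration:

```csharp
using System.Diagnostics;

class Renderer
{
    // Temporary debugging aid (assumed loop structure) - remove when done.
    // Break in code only for the pixel we care about, instead of letting a
    // conditional breakpoint interrupt on every single iteration.
    public static void RenderAll(int width, int height)
    {
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                if (x == 100 && y == 230)
                    Debugger.Break(); // stops here only for pixel (100, 230)

                // ... render pixel (x, y) ...
            }
        }
    }
}
```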

Once we have the debugger where we want it, all we need to do is open the Watch window for the instance we want and choose Export.

The Export window will show:

From here we can set several parameters:

Output format -> in this case C#

Depth -> for Ray we can use the default (3)

When satisfied with the results we can either Copy to Clipboard or Save to File for later use.

The second instance needed for the test is the Scene, which in this case was created from an outside service which we do not want to run each time we run our unit test.

Exporting the Scene object is done similarly; the only difference is that the depth needs to be increased to 5 in order to capture all of the information.

Now we can stop the debugger and fill our unit test with the two new objects.

After a few tweaks, as well as adding the expected result, we have the following unit test:
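A sketch of such a test is shown below. It assumes NUnit, and the helper names, the expected color values and the `CreateExported…` factory methods (which would contain the pasted Export output) are all illustrative, not the article’s actual code:

```csharp
using NUnit.Framework;

[TestFixture]
public class TraceRayTests
{
    [Test]
    public void TraceRay_ReflectivePixel_ReturnsExpectedColor()
    {
        // Arrange: instances recreated from the OzCode Export output.
        Ray ray = CreateExportedRay();     // pasted exported C# initializer
        Scene scene = CreateExportedScene();

        // Act: trace the single ray for pixel (100, 230).
        Color actual = new RayTracer().TraceRay(ray, scene);

        // Assert: expected values are placeholders for the real rendered color.
        Assert.AreEqual(0.5, actual.R, 0.001);
        Assert.AreEqual(0.3, actual.G, 0.001);
        Assert.AreEqual(0.2, actual.B, 0.001);
    }
}
```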

Using the same method explained above, I’ve created two external JSON files. In the test I use Json.NET to read those files and deserialize them into objects before running the test. I’ve added the files as an Embedded Resource to make sure that they are always available to the test, and wrote a simple method to load and deserialize the JSON files.
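Such a loader can be a few lines with Json.NET. The resource names in the usage comment are assumptions – an embedded resource name is the project’s default namespace plus the file path, so adjust accordingly:

```csharp
using System.IO;
using System.Reflection;
using Newtonsoft.Json;

static class TestData
{
    // Loads an embedded JSON resource and deserializes it with Json.NET.
    public static T LoadEmbedded<T>(string resourceName)
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        using (Stream stream = assembly.GetManifestResourceStream(resourceName))
        using (var reader = new StreamReader(stream))
        {
            return JsonConvert.DeserializeObject<T>(reader.ReadToEnd());
        }
    }
}

// Usage in the test (resource names are illustrative):
// var ray   = TestData.LoadEmbedded<Ray>("MyTests.Data.Ray.json");
// var scene = TestData.LoadEmbedded<Scene>("MyTests.Data.Scene.json");
```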

Conclusion

Exporting instances can be useful in many scenarios. In this article we saw how we can use OzCode to Export an instance for later use – in this case, in unit tests.

Using debug information to create unit tests can be especially useful when refactoring legacy code, as a developer can write automated tests for code he has little knowledge of, just by debugging and exporting the objects required to run a specific scenario. These tests enable safe refactoring and reconstruction of the code, without fear and without the lengthy analysis that is usually required to understand the code well enough to create intelligent inputs for automated tests.