Imagine it's 5 p.m., and you've just spent hours trying to diagnose a set of broken feature tests to no avail. You can't seem to replicate the issue manually, and the data coming out of the test runs is unhelpful at best, so you do your best to crawl around in the dark until you find a workaround.

Sound familiar?

This scenario is all too common in web development testing, and it can be easily mitigated by integrating screenshots into your testing workflow. To better illustrate the value of screenshots, let's first consider them from the perspective of diagnosing a customer-reported issue.

Customer problems are arguably much harder to diagnose than failures within a test suite, because the reports rarely include specific data. They may include reproduction steps for a particular issue, but those steps are often overly contextual and point toward areas of an application that have nothing to do with the problem at hand. The best way to diagnose a customer issue is with screenshots taken at different stages of use. Screenshots let you look for things the customer may not notice, such as unobtrusive error messages or discrepancies in the user interface that aren't clearly reflected in the logs. They are valuable because they convey information about software defects accurately and without bias, and in the context of testing, they provide information that might not otherwise be available.

Comparing Broken Tests

Broken Test Case.

Rather than simply talking about the value of screenshots in testing, let's look at an example broken test suite with and without screenshots. For this example, I'm using the example PHP-Sausage-Selenium repository provided by Sauce Labs, which runs an end-to-end test suite using PHPUnit, Selenium, and Sauce Labs. While the example repository's tests pass as-is, I made a few changes to ensure that the suite fails with less-than-helpful error messaging.
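Concretely, the change was to look up an element that doesn't exist on the page. A minimal sketch of the broken test, assuming the Sausage `WebDriverTestCase` base class and the guinea-pig demo page used by the example repository (the browser version and platform strings here are illustrative):

```php
<?php
// A deliberately broken Sausage test: the guinea-pig demo page contains
// a link with the id "i am a link", but nothing with the id below.
class BrokenLinkTest extends Sauce\Sausage\WebDriverTestCase
{
    // Browser/OS combination to run against (illustrative values).
    public static $browsers = array(
        array(
            'browserName' => 'firefox',
            'desiredCapabilities' => array(
                'version' => '35',
                'platform' => 'Windows 7'
            )
        )
    );

    public function setUpPage()
    {
        $this->url('https://saucelabs.com/test/guinea-pig');
    }

    public function testBrokenLink()
    {
        // Fails with a bare "no such element" error -- PHPUnit's output
        // won't tell us what the page actually looked like at this point.
        $link = $this->byId('i am not a link');
        $link->click();
    }
}
```

Running this requires a configured Sauce Labs account, so treat it as a sketch of the failure mode rather than a drop-in test.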

PHPUnit Test Results.

As you can see above, our test suite failed to find an element, but it doesn't provide any additional information other than that the element is not there. Without any visual aids, solving this problem could be as simple as stepping through the test manually in a browser, or as difficult as dumping raw data and guess-and-checking your way to a solution. Now, let's look at how the screenshot feature provided by Sauce Labs can significantly speed up diagnosis of this failed test.

Sauce Labs Test Results.

If the problem isn't obvious from the screenshot above, take a closer look at the page content. We know from the test suite that we are attempting to find the "i am not a link" element, but in the screenshot we can see that the only element that comes close is "i am a link." By tying our test results directly to screenshots, we make diagnosing this issue nearly effortless, which lets us spend more time building features and less time fixing bugs.
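This diagnosis required no extra code because Sauce Labs captures a screenshot for every Selenium command. If you're running against a plain local Selenium server instead, you can capture similar evidence yourself. A sketch, assuming the `currentScreenshot()` helper exposed by the PHPUnit Selenium2 test case that Sausage builds on (the output filename is an arbitrary choice):

```php
public function testBrokenLink()
{
    try {
        $this->byId('i am not a link')->click();
    } catch (Exception $e) {
        // Save a PNG of the page exactly as it looked when the lookup
        // failed, then re-throw so the test still fails as usual.
        file_put_contents(
            'testBrokenLink-failure.png',
            $this->currentScreenshot()
        );
        throw $e;
    }
}
```

The try/catch wrapper is the simplest place to hook this in; a reusable alternative is to do the same thing once in an overridden failure hook so every test in the suite gets a screenshot for free.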

Taking It Further

It is worth mentioning that, while the above tests were run on only one browser and operating system, the ability to run a test suite across numerous operating systems and browsers helps you diagnose issues on systems you don't have direct access to. This makes it significantly easier to provide a consistent, screenshot-verified customer experience across browsers. While the above example is simplistic, the implications should be clear: in a large test suite with complex user flows, identifying and fixing issues can be incredibly difficult. By providing screenshots throughout each test case, Sauce Labs makes diagnosing even the most complicated failures as straightforward as watching over a user's shoulder while they demonstrate a bug.
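With Sausage, fanning a suite out across platforms is just configuration: the static `$browsers` array on the test case lists every browser/OS combination the suite should run against, and each entry gets its own screenshot-backed run. A sketch, with the specific version and platform strings as illustrative assumptions:

```php
// Each entry runs the full test case -- screenshots included --
// on a different browser/OS combination in the Sauce Labs cloud.
public static $browsers = array(
    array(
        'browserName' => 'firefox',
        'desiredCapabilities' => array('version' => '35', 'platform' => 'Windows 7')
    ),
    array(
        'browserName' => 'chrome',
        'desiredCapabilities' => array('version' => '40', 'platform' => 'OS X 10.10')
    ),
    array(
        'browserName' => 'internet explorer',
        'desiredCapabilities' => array('version' => '11', 'platform' => 'Windows 8.1')
    )
);
```

Because this is plain data, adding coverage for a new browser is a one-entry change rather than a new test file.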