Testing, test management and generic techy stuff


Monthly Archives: July 2014

New testers tend to be preoccupied with the motions of the test. They’ve studied methods for identifying boundaries, and know the importance of negative tests. If they have been diligent, they even know to test error handling and recovery. Still, the bright but inexperienced tester often stops a step short of actually knowing whether the test passed or failed.

Let’s look at an example: Testing that an element can be saved to a database. You prepare the element, save it, and the application displays a happy message saying that the element was saved. Done, test passed! Right?

The experienced tester, of course, would not think of stopping there. All you have tested so far is that you get feedback when saving to the database. And you haven’t even tested that it’s the correct feedback. If the save happened to fail behind the scenes, you’d actually have a much more serious issue – the dreaded silent fail. And, of course, you haven’t tested what you said you would test: That the element was saved to the database.
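To make the gap concrete, here is a minimal sketch in Python using an in-memory SQLite database. The `save_element` function and its `fail_silently` flag are hypothetical stand-ins for the application under test; the flag simulates a save that fails behind the scenes while still showing the happy message.

```python
import sqlite3

def save_element(conn, name, fail_silently=False):
    # Hypothetical save routine: when fail_silently is True it skips
    # the write entirely but still reports success -- the dreaded
    # silent fail.
    if not fail_silently:
        conn.execute("INSERT INTO elements (name) VALUES (?)", (name,))
        conn.commit()
    return "Element saved!"  # happy message regardless of the outcome

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elements (name TEXT)")

# The shallow test: only the feedback is checked, so this "passes"
# even though nothing was written.
message = save_element(conn, "widget", fail_silently=True)
assert message == "Element saved!"  # passes -- but proves nothing

# The complete test: read the element back to prove it was persisted.
rows = conn.execute(
    "SELECT name FROM elements WHERE name = ?", ("widget",)
).fetchall()
print(len(rows))  # 0 -- the readback exposes the silent fail
```

Checking only `message` is exactly the "stopping a step short" mistake: the test asserts on the feedback, not on the thing the feedback claims happened.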

For every test you perform or design, whether manual or automated, the most important question you can ask yourself is: “If this test passes, does it really prove that the application did what I set out to test?”

For the database example, there are multiple ways to complete the test. You can simply reopen the saved element, you can use the element in a follow-up operation that has to read it in order to work, or you can inspect the element directly in the database. What you choose will depend on the application itself – for example, if it caches elements, reopening the element may not be proof enough. It’s up to you to know what is proof enough.
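Two of those completion strategies can be sketched against the same hypothetical SQLite setup; again, `save_element` and the table layout are illustrative assumptions, not a real application API.

```python
import sqlite3

def save_element(conn, name):
    # Hypothetical save routine for the application under test.
    conn.execute("INSERT INTO elements (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE elements (name TEXT)")
save_element(conn, "widget")

# Strategy 1: inspect the element directly in the database. This
# bypasses any application-level cache, so a stale in-memory copy
# cannot mask a failed write.
row = conn.execute(
    "SELECT name FROM elements WHERE name = ?", ("widget",)
).fetchone()
assert row is not None, "element was not persisted"

# Strategy 2: use the element in a follow-up operation that must
# read it to work -- here, a rename that affects zero rows if the
# element is absent.
updated = conn.execute(
    "UPDATE elements SET name = ? WHERE name = ?", ("gadget", "widget")
).rowcount
assert updated == 1, "element could not be used in a later operation"
```

Direct inspection is the strongest proof for a plain database write, while the follow-up operation doubles as a check that the saved element is actually usable, which is usually what the requirement meant in the first place.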