Comments on: Expected Results: How to persuade developers to document them

By: sbarber
Thu, 24 May 2007 18:35:05 +0000

Documenting expected results while testing rather than during test design is certainly a step in the right direction, in my opinion, but I still think it is flawed in a fundamental way for a majority of tests.

If, for instance, you are testing some kind of computational device and there is only one acceptable “result” for a given series of inputs, then documenting that result may make sense.

However, most of the time it doesn’t make sense to design, build, and execute a test simply to look for one *exact* response. What we testers *really* have (if we are doing our job well) is not a single expected result but a whole range of expectations, which often take the form of oracles and heuristics. In that vein, documenting our expectations would be *better* than documenting an “expected result”.
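To make that contrast concrete, here is a rough sketch in Python (pytest assumed as the runner; calculate_tax and submit_reply are made-up stand-ins for a system under test, not anything from the original post):

    # Sketch only: calculate_tax and submit_reply are hypothetical stand-ins.
    def calculate_tax(amount: float, rate: float) -> float:
        # A computational function with exactly one acceptable result.
        return round(amount * rate, 2)

    def submit_reply(body: str) -> dict:
        # Stand-in for what happens when I press the reply button.
        return {"status": "ok", "rendered_body": body, "elapsed_seconds": 0.4}

    def test_single_expected_result():
        # Here a single documented expected result makes sense: 7.00, exactly.
        assert calculate_tax(100.00, 0.07) == 7.00

    def test_range_of_expectations():
        # Here there is no single expected result, only heuristic expectations;
        # these three are a tiny slice of the many a tester actually holds.
        result = submit_reply("Documenting expected results while testing...")
        assert result["status"] == "ok"                       # it didn't error
        assert result["elapsed_seconds"] < 2.0                # it felt responsive
        assert "expected results" in result["rendered_body"]  # my text survived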

The problem is that our expectations are generally far too numerous to document in a way that anyone would find valuable. In fact, I can think of over 100 expectations I have for what will happen when I finish typing and press the reply button, without missing a beat.

The reality is that you can document anything you want, as much as you want, but no test conducted by a human being who has the ability to judge goodness and badness, evaluate likes and dislikes, and make comparisons to their own personal experiences can be independently verified any further than to say “this test was conducted on this date, at this time, by this individual, on this machine, under these conditions.”

Which is why, rather than walking the fine line between grossly oversimplifying and patently lying via documentation, I recommend simply recording and filing the test execution. It’s faster, more accurate, significantly harder to fake, and all it costs is some storage space. Especially given the number of times I have seen and heard about development projects where all those binders full of well-documented test cases and expected results described tests that were never actually executed after they were written.
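As a rough sketch of what “recording and filing the test execution” could look like in code (Python; the archive location and field names are assumptions chosen purely for illustration, and a real setup would file the session recording alongside this record):

    import getpass
    import json
    import platform
    from datetime import datetime, timezone
    from pathlib import Path

    ARCHIVE = Path("test-evidence")  # assumed location, purely illustrative

    def file_test_execution(test_name: str, recording: str = "") -> Path:
        # Capture exactly what IS independently verifiable: this test was
        # conducted on this date, at this time, by this individual, on this
        # machine, under these conditions.
        executed_at = datetime.now(timezone.utc).isoformat()
        record = {
            "test": test_name,
            "executed_at": executed_at,
            "executed_by": getpass.getuser(),
            "machine": platform.node(),
            "conditions": platform.platform(),
            "recording": recording,  # e.g. path to the filed session video
        }
        ARCHIVE.mkdir(exist_ok=True)
        out = ARCHIVE / f"{test_name}-{executed_at.replace(':', '-')}.json"
        out.write_text(json.dumps(record, indent=2))
        return out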

Think about it: what SOX auditor would actually prefer to conduct all of the tests themselves by reading a document, as opposed to (more or less) watching a DVD movie at double speed to SEE all of the testing that was conducted?

If I read into your approach, using “intentional blindness” as an argument against documenting expected results sounds like an invitation to turn testing into a “fishing expedition.” Given a tester motivated to break the software, maybe that’s an acceptable contrarian view. After all, is there anything more demotivating than compliance with SOX?

Or is your approach simply to document the expected results after the test is executed? That would still achieve the standard of “independently verifiable” while avoiding the risk of inattentiveness. Given that the developers are not passing the software over to testers, but rather testing it themselves, this may be viable. But then again, attentiveness is all about motivation: whether I’m inattentive in designing a test or in reviewing the results, the same risks would seem to apply.
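If that is the approach, it has a mechanized cousin in snapshot (sometimes called approval) testing, where the first execution records the actual output as the expectation and later runs verify against it. A minimal hand-rolled sketch in Python (the snapshots directory is an assumption for illustration):

    from pathlib import Path

    SNAPSHOT_DIR = Path("snapshots")  # assumed location, purely illustrative

    def verify_against_snapshot(name: str, actual: str) -> None:
        # First run: document the expected result AFTER the test executes.
        # Later runs: independently verify against that recorded expectation.
        SNAPSHOT_DIR.mkdir(exist_ok=True)
        snapshot = SNAPSHOT_DIR / f"{name}.txt"
        if not snapshot.exists():
            snapshot.write_text(actual)  # a human still has to review this
            return
        assert actual == snapshot.read_text(), (
            f"{name} diverged from the recorded expectation"
        )

Notably, the inattentiveness risk doesn’t disappear here either; it just moves to the review of that first recorded output.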