I'm wondering if anybody else uses this way of reporting test results.
Some of the projects in the company I work for report test results on a cumulative basis, meaning that a test which passed once is reported as passed even if it fails later (a bug is opened anyway). Sometimes this is taken to an extreme: when the time needed for a full regression is longer than the time between releases (or the team's bandwidth is too low), tests are run across several versions and reported only at the end. This way there are cases where a test passed for version X and failed for version X+1, but is reported as Passed anyway.

The justification for this method is simple: the reports are used only as a way to estimate build quality for intermediate builds, and a final test phase is done before a build is released to customers, or at least should be done.
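The difference between the two reporting styles can be sketched in a few lines. This is a hypothetical illustration, not the in-house tool mentioned below; the test name, version labels, and dictionary layout are all made up for the example.

```python
# Two runs of the same test across two versions: it passes on v1,
# then regresses on v2.
runs = [
    ("login_test", "v1", "passed"),
    ("login_test", "v2", "failed"),  # regression introduced in v2
]

# Cumulative ("sticky pass") reporting: once a test has passed,
# later failures never change its reported status.
cumulative = {}
for test, version, result in runs:
    if cumulative.get(test) != "passed":
        cumulative[test] = result

# Per-build reporting: each version gets its own snapshot.
per_build = {}
for test, version, result in runs:
    per_build.setdefault(version, {})[test] = result

print(cumulative)        # {'login_test': 'passed'} -- the v2 failure is hidden
print(per_build["v2"])   # {'login_test': 'failed'} -- the regression is visible
```

The cumulative report answers "did this test ever pass?", while the per-build snapshot answers "what is the state of this build?", which is the question the answers below argue actually matters.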

Could you retitle this so that it reads more as a question? Also, this seems to be less of a question and more of a gathering of "Anyone else run into this? What's your experience?"
–
TristaanOgre May 16 '11 at 18:18

3 Answers
3

So if I understand your question correctly, you are reporting a test as passed even if it fails later, because you are going to do another test phase.

I think that I need to share some tough love. This method is wrong, and could be downright dangerous to someone's career.

The reason why is that you simply don't know what state any particular build is in. Marking a test as "not run" is one thing, because it says "I don't know".

Marking a test as "Passed" when you have executed it and it has failed simply makes your testing invalid, because you can't trust your results. You will have the same, in fact maybe more, confidence in a "not run" (aka "I don't know") than in the lie of "it actually failed, but I will mark it as passed."

To highlight this: how would you answer the question "Can we ship NOW?" if you had to release a version without the final test phase, because a critical security fix was required?

So I'm sorry, and this is probably what you don't want to hear, but I don't just think that this method is different; I think it is actually incorrect. Sorry.

Don't get me wrong, I know this is wrong and I avoid it when possible, and raise a red flag if reality (i.e. bugs) contradicts it. But some of the tools (an in-house test management tool) just work this way. I think that originally it was designed to allow failed runs until a test is stable, but now it is being abused most of the time.
–
Rsf May 15 '11 at 12:32

2

Agreed with Bruce. Have to ask - what is the point of intermediate builds? You have no reliable feedback about what's broken in an intermediate build, so all the work of creating that intermediate build is wasted, as you don't get to fix what's broken.
–
testerab♦ May 15 '11 at 13:11

You are not entirely correct. Obviously we don't have clear knowledge of the total quality, but bugs are caught and fixed, and those intermediate builds are not a total waste of time. Another part of the justification is that the SW is developed across the world, by many teams that don't always communicate with each other, sometimes at different points in time or on different platforms. Usually an intermediate build is aimed at testing a single module, with the hope that there is someone making sure a final system test is done.
–
Rsf May 16 '11 at 10:42

That's kind of how I felt reading that, Joe. There's no visibility into, or reliability of, the reporting for the next version if tests that passed in previous versions are already marked as passed in the next version.
–
TristaanOgre May 16 '11 at 18:20

I've had to do this at a past company, so I know where you are coming from. It took a bit of a mindset shift for me to come to terms with it, but the way we ended up looking at it is that each build has its own tests. Cumulative results don't work: defects that are caught in earlier builds and fixed in later ones are not going to be representative of your final results. You will always need to come back and treat the new build as fresh and untested. Even if it looks like you could get away with bringing some of the earlier statuses forward, you will just get into trouble and the results won't be representative.

We always said that the current results are a snapshot of the current build, regardless of how the earlier builds fared. What I did with the results, which was somewhat useful, was to keep a history of pass/fail on each test so I knew what to focus on later.
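The snapshot-plus-history idea above can be sketched in a few lines. This is an illustrative example only; the build names, test names, and the `focus` heuristic are assumptions, not the answerer's actual tooling.

```python
from collections import defaultdict

# Each build gets its own snapshot of results (the "current build" view).
snapshots = {
    "build_101": {"login": "passed", "checkout": "passed"},
    "build_102": {"login": "failed", "checkout": "passed"},
    "build_103": {"login": "passed", "checkout": "failed"},
}

# Derive a per-test history across builds, without overwriting any snapshot.
history = defaultdict(list)
for build in sorted(snapshots):
    for test, result in snapshots[build].items():
        history[test].append(result)

# Tests that failed at least once are candidates to focus on next cycle.
focus = sorted(t for t, results in history.items() if "failed" in results)

print(dict(history))  # {'login': ['passed', 'failed', 'passed'], 'checkout': ['passed', 'passed', 'failed']}
print(focus)          # ['checkout', 'login']
```

The key design point is that the history is derived from the snapshots rather than replacing them: each build's report stays an honest record of that build, and the cross-build view is a separate, secondary artifact used only for prioritisation.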