Retesting is a key part of the test management process, but there is a right and a wrong way to approach it. You run some testcases as part of one cycle. Some fail, and you run them again as part of the next cycle. When they pass, the temptation is to go back to the original testcase record and change the result to a pass. This is especially tempting where only a small fix or configuration change was required to resolve the issue. It is the wrong way to approach retesting.

As part of the test management cycle, it’s imperative that each execution is tracked and recorded independently. The scenario is this:

1. You execute against a specific build and configuration.
2. You record a result of fail.

Note that at this point the failed result is logged against this specific build and configuration.

3. You get the development team to fix the bug.
4. You rerun the test and record a pass.

At step 3 you have a fix: a change in the application that resolved an issue. When you rerun at step 4, you are testing something different. This is important! Even if the change to resolve the issue was a minor configuration change, it constitutes a different release/config/build. It’s imperative that this difference is tracked. Simply changing the previous result to a pass is not the right approach; in this scenario you’re effectively recording the pass against the wrong release/config/build. The key concept to grasp here is that…

Good test management practice dictates that each time you run the testcase you record the exact release, build/iteration and configuration that you are testing against.
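As a sketch of what this record-keeping might look like in code (the class and field names here are illustrative, not tied to any particular tool), the essential rule is that each execution appends a new result record rather than overwriting an earlier one:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestRun:
    """One execution of a testcase against a specific release, build and config."""
    release: str
    build: str
    configuration: str
    result: str  # "pass" or "fail"

@dataclass
class TestCase:
    name: str
    runs: List[TestRun] = field(default_factory=list)

    def record_run(self, release: str, build: str, configuration: str, result: str) -> None:
        # Always append a new run record; never edit an earlier result.
        self.runs.append(TestRun(release, build, configuration, result))

# The scenario from the article: a fail against one build, then a pass
# against the fixed build -- two separate records, not one edited record.
tc = TestCase("login-check")
tc.record_run("1.0", "1.0.1", "default", "fail")
tc.record_run("1.0", "1.0.2", "default", "pass")
```

The history now shows both the failure on the original build and the pass on the fixed build, so each result stays attached to the version it was actually run against.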

If a developer has made a change, however minor, then any future retests need to be tracked as separate result records. This raises the question: how do we track this in real life? The following example uses the test management application QAComplete to demonstrate this principle.

The steps we followed in this video example are…

1. Identify the failed testcases you need to rerun
2. Create a filter based on the ‘last run status’
3. Create a set for the next release
4. Include the testcases identified by the filter in the set
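Outside of QAComplete, the filtering and set-building steps can be sketched generically. This is a minimal illustration, assuming testcase records with a `last_run_status` field (the field and record shapes are assumptions, not QAComplete's actual data model):

```python
# Hypothetical testcase records carrying the status of their last run.
testcases = [
    {"id": 1, "name": "login", "last_run_status": "pass"},
    {"id": 2, "name": "checkout", "last_run_status": "fail"},
    {"id": 3, "name": "search", "last_run_status": "fail"},
]

# Step 2: filter based on the last run status to find what needs rerunning.
failed = [tc for tc in testcases if tc["last_run_status"] == "fail"]

# Steps 3-4: create a set for the next release and include the
# testcases identified by the filter.
retest_set = {"release": "next release", "testcase_ids": [tc["id"] for tc in failed]}
print(retest_set)  # {'release': 'next release', 'testcase_ids': [2, 3]}
```

Because the set is built from the filter rather than by editing old results, the new executions it drives are recorded against the next release, not retrofitted onto the previous one.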

With this approach we have a set specifically for retesting all of the failed testcases from the first cycle of testing. This is good test management practice because we’re always tracking the right results against the right versions, builds and configurations of the software.