If I have a test that fails due to a known bug, but that bug is deemed lower priority to fix than other work, what should be done with the test?

I can think of 2 options:

Leave the test in the test set. However, now every time the test set is run the whole set will fail, which may mask other newly introduced bugs.

Remove the test, or mark it as an expected failure. The ticket for the bug would then reference the test, so that when the bug is fixed the test is re-enabled. The concern with this approach is that the bug could be ignored forever and QA would not be driving development.
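In a Python/pytest suite (just as an illustration; your stack may differ), option 2 could look something like the sketch below. The function, test, and ticket ID BUG-1234 are all invented:

```python
import pytest


def round_total(value):
    # Stand-in for production code with the known bug: float representation
    # makes 10.005 round down to 10.0 instead of up to 10.01.
    return round(value, 2)


# The test stays in the suite but is marked as an expected failure, with the
# bug ticket recorded in the reason. strict=True makes an unexpected PASS
# show up as a failure, so the marker gets removed once the bug is fixed.
@pytest.mark.xfail(reason="Known bug BUG-1234: invoice totals rounded incorrectly",
                   strict=True)
def test_invoice_total_rounding():
    assert round_total(10.005) == 10.01
```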

"However now every time the test set is run the whole set will fail, which may mask other newly introduced bugs." Why does it mask other bugs?
– Joe Strazzere, Apr 11 '12 at 9:52

@JoeStrazzere because people will get used to the test suite failing, so they may not bother to check the results as thoroughly as they should and will miss one additional failure in the suite.
– danio, Apr 13 '12 at 11:37

It might make more sense to attack the "may not bother to check" part of the problem, rather than artificially removing a real failure.
– Joe Strazzere, Apr 16 '12 at 17:56

@JoeStrazzere I'm with danio on this. You're just swimming against the tide by trying to change people's behaviour. Better to just accept it and change your system to compensate.
– Mal Ross, Apr 24 '12 at 11:36

5 Answers

The method used where I work relies on a tri-modal result: pass, warn, or fail. We typically configure cases affected by a low-priority bug to warn, so that until the bug is corrected (possibly several releases later) that set of tests reports a warning but does not impact the rest of the run (so that any other changes will still be identified).

It takes a bit of work and a highly modular, data-driven framework to be able to do this, but in my experience it's well worth the effort. Right now there are something like fifty such runs of different test sets, some of them reporting warnings for known bugs while others run clean. Every single one of them picks up new bugs within 24 hours of introduction unless a bug breaks something to the level where automation can't continue and isn't fixed for some time (for obvious reasons, anything that completely breaks automation is an emergency).

Having your tests modular enough that you can configure a case to report a warning and continue without causing downstream effects is the way I'd choose to avoid this dichotomy altogether.
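Standard unit-test frameworks generally only know pass and fail, so a warn verdict needs framework support. As a rough, generic sketch of the idea (all names and the ticket ID are invented):

```python
from enum import Enum


class Verdict(Enum):
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"


# Cases known to be affected by open, low-priority bugs (ticket IDs invented).
KNOWN_ISSUES = {
    "test_export_pdf": "BUG-2045",
}


def run_case(name, test_fn):
    """Run one case; downgrade failures caused by known bugs to WARN."""
    try:
        test_fn()
        return Verdict.PASS, None
    except AssertionError as exc:
        if name in KNOWN_ISSUES:
            # Known bug: warn, so the rest of the run is still meaningful.
            return Verdict.WARN, f"{KNOWN_ISSUES[name]}: {exc}"
        return Verdict.FAIL, str(exc)


def test_export_pdf():
    assert False, "page footer missing"


print(run_case("test_export_pdf", test_export_pdf))
# (<Verdict.WARN: 'warn'>, 'BUG-2045: page footer missing')
```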

We use a system to mark known failures in a database with a link to our bug tracking system. Once the bug is marked complete in the bug tracking system, the test is automatically turned back on. We use this even for high-priority failures so that the report is clean for later, unrelated submissions.
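A minimal sketch of that idea, assuming a bug tracker that exposes an HTTP API for issue status (the URL, JSON field, and ticket ID below are placeholders, and the check happens at test collection time):

```python
import pytest
import requests

TRACKER_URL = "https://tracker.example.com/api/issues"  # placeholder URL


def bug_is_open(bug_id):
    """Ask the (hypothetical) bug tracker whether an issue is still open."""
    resp = requests.get(f"{TRACKER_URL}/{bug_id}", timeout=5)
    resp.raise_for_status()
    return resp.json().get("status") != "closed"


def known_failure(bug_id):
    """Expected failure only while the bug is open.

    Once the bug is closed in the tracker, the marker disappears and the
    test runs (and must pass) again, with no manual re-enabling step.
    """
    if bug_is_open(bug_id):
        return pytest.mark.xfail(reason=f"Known failure, see {bug_id}")
    return lambda test_fn: test_fn  # bug closed: leave the test untouched


@known_failure("BUG-1234")
def test_report_totals():
    assert 1 + 1 == 2
```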

The problem with "best practice" is that your work is always an exception.

Other answers have suggested methods of turning tests back on automatically when the code is fixed. That is far better than hoping that somebody remembers to turn them back on, but:

how do you report the non-running test in this case?

does the test/bug end up in release notes?

We are perhaps lucky in that developers and testers work together and there is normally somebody to do an ad hoc investigation.

We add excuses to the test script (a rough sketch of the idea follows below):

The tests still fail and are in statistics

The excuse is presented at the top level

Analysis (verifying that the failure matches the excuse) is straightforward

Doomed Tests (won't be fixed for 'a long time') are kept running

With an excuse of course

If there are multiple tests with the same failure, we tend to reduce down to an exemplar, or shove the rest to the end of the campaign so that they are skipped.
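A rough sketch of the excuse mechanism described above (the runner, test names, and ticket IDs are invented; an excused test still fails and still counts in the statistics, the excuse is just surfaced in the report and checked against the actual failure):

```python
EXCUSES = {
    # test name -> (ticket reference, text the failure is expected to contain)
    "test_csv_export": ("BUG-512", "UnicodeEncodeError"),
}


def report_failure(name, error):
    """Format a failure line, attaching and verifying any recorded excuse."""
    excuse = EXCUSES.get(name)
    if excuse is None:
        return f"FAIL {name}: {error}"                    # a real, new failure
    ticket, expected_text = excuse
    if expected_text in str(error):
        return f"FAIL {name} (excused, see {ticket})"     # failure matches excuse
    return f"FAIL {name}: {error} (does NOT match excuse {ticket})"


print(report_failure("test_csv_export", "UnicodeEncodeError: 'ascii' codec"))
print(report_failure("test_login", "timeout after 30s"))
```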

You will notice from the above that the tests are not fully automatic.

We operate a number of different test campaigns. The sanity and regression campaigns only ever get tests that are proven to be reliable. That should mean that failures such as the one described here never make it into those campaigns...

...so we can still run fully automated tests to prove that the build hasn't been broken.
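With pytest-style markers, that split could look roughly like this; the marker names and the ticket ID are only examples, and the markers would need registering in pytest.ini:

```python
import pytest


@pytest.mark.sanity
def test_login_name_normalised():
    # Proven reliable, so eligible for the sanity/regression campaigns.
    assert "Admin".lower() == "admin"


@pytest.mark.known_bug
def test_export_unicode_filenames():
    # Fails because of an open low-priority bug (BUG-777, invented ID);
    # kept out of the clean campaigns until the bug is fixed.
    assert "ü".encode("ascii", errors="ignore").decode() == "u"
```

The clean, fully automated campaigns would then run something like `pytest -m "sanity and not known_bug"`, while a separate job can run `pytest -m known_bug` to keep an eye on the excused tests.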

A low-priority fix would mean that the bug is not of a critical nature and will not impact the system in a big way. In such a scenario, of the two options you have mentioned, the second looks better. Option 1 will produce a kind of "false alarm" even when you don't have any other issues. The bug tracking system should ensure that the bug does not fall through the cracks and remains visible when working on future releases.

QA-driven development is good, but QA should find a balance between assertiveness and letting go.