One of the managers at my company recently suggested that we should add a field in Jira to track the number of times a bug gets reopened. I didn't think it was a very useful thing to track, but I had a hard time articulating why. So, I have a couple of questions...

What statistics do you track?
How do you decide which to track and which to ignore?
Would you track the number of times bugs get reopened? Why or why not?

5 Answers

As well as the usual "what they said" to joshin and user246, I'd add this: there's no guarantee that the number of times a bug gets reopened will give you any useful information. Here are a few reasons why:

There are several different bugs involved, but they all have the same symptoms. Here, someone could have reopened the bug thinking it was the same problem, but it was a different problem with the same symptoms. You have no way of knowing whether the reactivation of the bug report is one of these or not.

Someone creates a new bug report for the same problem, and it isn't detected as a duplicate. This is particularly common with large, complex systems where the manifestation of a problem can differ depending on the user's configuration, or where everyone reports problems to the same system.

The bug report is reopened in order to back-integrate the fix into another version of the application.

Basically, in anything that's got enough complexity to generate... interesting bugs, human differences will make this kind of bug report tracking an exercise in false alarms. The most specific information you're ever likely to get is which areas of the system generate the most problems, and that's something you don't need a bug tracker for - every tester who works with a system quickly gets a feel for the trouble areas.

In addition, as joshin pointed out, there's the risk of perverse incentives. Companies where developers are penalized for introducing bugs often develop a completely informal system of bug reporting, so that testers don't feel they're harming the programmers by reporting problems.

Finally, a bug that gets reopened a lot may not be one that's incorrectly fixed or where the problem is misunderstood. It may be an intermittent problem where round 1 involves adding logging to try to trace the problem when it occurs, round 2 adds more logging based on the information derived from round 1, round 3 includes a possible fix but, since the problem can't be reproduced by the team, still more logging in case the fix isn't right, and so forth.
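One practical aside: if you do decide to measure this, you may not even need a new field. Jira's changelog already records every status transition, so a reopen count can be derived after the fact. Here's a minimal sketch against Jira's REST API - the base URL, credentials, issue key, and the "Reopened" status name are assumptions you'd adjust for your own instance and workflow:

```python
import requests

# Assumed values -- replace with your own Jira base URL and credentials.
JIRA_URL = "https://jira.example.com"
AUTH = ("username", "api_token")

def reopen_count(issue_key):
    """Count how many times an issue's status moved to 'Reopened',
    using the changelog Jira already keeps for every issue."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}",
        params={"expand": "changelog"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    histories = resp.json()["changelog"]["histories"]
    # Note: Jira Cloud caps the expanded changelog at 100 entries;
    # issues with very long histories would need paginated requests.
    return sum(
        1
        for history in histories
        for item in history["items"]
        if item["field"] == "status" and item["toString"] == "Reopened"
    )

print(reopen_count("PROJ-123"))  # hypothetical issue key
```

Of course, everything above about false alarms still applies: the number this produces is only as meaningful as the reasons people had for reopening.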

It depends on what you consider useful. If you start tracking bugs with some metrics (number of times reopened, number of bugs per developer, etc.), you might inadvertently start creating incentives for particular behaviours. If you start tracking bugs per developer, for example, you might find dramatic changes in the number and severity of bugs reported. Tracking the number of times a bug needs to be reopened might cause more attention to be given to bugs that are reopened often. This could be a good goal, or not, depending on your circumstances.

So the answer I'd give is to track measurements that (you think) are relevant to your project. I'm doubtful there are overall metrics that are helpful in every situation, but there are likely metrics that are often helpful.

Also remember that every minute you spend deciding which metrics to use is a minute you're not actually finding bugs and improving software. People too easily forget the concept of opportunity cost.

In my little company with three developers and one test person, the only aggregate bug statistic we track is the total number of bugs fixed in the release. Since there are many possible reasons for a large bug count, we do not make decisions based solely on that number.

At my previous job, we also tracked the total number of bugs fixed in the release. We also tracked the number of bugs per developer and per functional area. I used those statistics to pressure a developer who had a reputation for checking in buggy code. In retrospect, I should have been more cautious about how I used those statistics. Bug tracking statistics are notoriously sloppy, and using those statistics in the wrong way can damage someone's career unfairly.

It is interesting to track reopens because they indicate a misunderstanding. You must spend additional effort to determine where the misunderstanding resides and why it exists.

In regards to the portion of the question "How do you decide what to track and what to ignore?": I am in the middle of reevaluating our metrics right now, and my approach is to ask the stakeholders what is valuable to them. Ask the developers, ask the project managers, ask upper management, ask the end users/customers, and ask yourself and the other members of the test team. Different metrics are valuable to different audiences.

Metrics can fall into several categories:

Metrics measuring the status of testing

- % of tests executed
- % of test plan complete

Metrics measuring the effectiveness of testing

- Availability of the test environment
- Defect age
- Number of defects found after release vs. number found before release

Metrics measuring quality

- % of test coverage
- % of tests passed
- Number of defects found
- Severity of defects

Metrics measuring resources

Several metrics can fall into more than one category, but the categorization itself isn't important. I listed more than one category simply to get you thinking about how different metrics might be viewed by different audiences.
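To make one of those metrics concrete: defect age is simply the elapsed time between a defect being reported and it being resolved. A quick sketch of computing an average from exported timestamps (the dates below are made up for illustration):

```python
from datetime import date

# Hypothetical (reported, resolved) date pairs exported from a tracker.
closed_defects = [
    (date(2023, 1, 4), date(2023, 1, 10)),
    (date(2023, 1, 5), date(2023, 2, 1)),
    (date(2023, 2, 11), date(2023, 2, 12)),
]

ages = [(resolved - reported).days for reported, resolved in closed_defects]
print(f"average defect age: {sum(ages) / len(ages):.1f} days")  # 11.3 days
```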

As joshin mentioned, any metric may start "creating incentives for particular behaviours". When using a metric, be very clear with everyone on the team about its intended purpose. A metric should always be an aid to improving the dev/test process. The metrics I found useful were defect trend reports, such as the number of defects opened and the number closed over a period of time.
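A minimal sketch of such a trend report, assuming you can export open/close events from your tracker (the event data below is invented):

```python
from collections import Counter
from datetime import date

# Hypothetical event log exported from a bug tracker: (event, date) pairs.
events = [
    ("opened", date(2023, 3, 1)), ("opened", date(2023, 3, 2)),
    ("closed", date(2023, 3, 3)), ("opened", date(2023, 3, 9)),
    ("closed", date(2023, 3, 10)), ("closed", date(2023, 3, 11)),
]

# Bucket events by ISO week number to show opened vs. closed per week.
trend = Counter((d.isocalendar()[1], kind) for kind, d in events)
for (week, kind), n in sorted(trend.items()):
    print(f"week {week}: {n} {kind}")
```

If the "opened" line consistently outpaces the "closed" line, that's a far more actionable signal than any single per-bug counter.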