Wondering how other people are handling this, because the teams I work with are running into issues tracking when our defects are actually being worked on.

What we do now is fill out an estimated completion date field, which is mandatory when a defect is switched to an open status. Admins use this date to track how old defects are and whether they are being worked on. People usually have a tough time keeping this updated, and as a defect coordinator I end up pushing these dates out all the time.

What we are running into is that people will leave a defect in a new status while they are investigating it, so that no one tracks their progress, because they aren't actually fixing anything at that moment.

Why would people not want anyone to track their progress when they are investigating before fixing? Are they penalized/rewarded according to those statistics?
–
user246 Aug 8 '12 at 15:49

They aren't really penalized or rewarded, but they can be bothered based on those statistics. So they know that if they leave it in a new status, they will have a longer time to investigate the issue before someone starts asking about it.
–
Salmonerd Aug 8 '12 at 17:05

4 Answers

We use a "check-in date" (estimated completion date) after the bug is understood and we start working on the fix to estimate when the work will be done. But we use the actual date the bug is opened until it is resolved to help us track progress against bugs and work items.

In burn-down charts or glideslope graphs, we track the number of active issues (bugs or work items) and the number of active issues open > 72 hours, based on the actual open dates.

Essentially we are looking at trends. For example, if the number of active issues open > 72 hours equals the number of active issues, we can predict that issues aren't being fixed fast enough, and the number of open issues will usually spike.
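As a rough sketch of what that chart is built from (Python, with made-up issue records; the field layout and dates are illustrative, not from any particular tracker), a single burn-down data point could be computed from actual open dates like this:

```python
from datetime import datetime, timedelta

# Hypothetical issue records: (id, opened_at, resolved_at or None).
issues = [
    ("BUG-1", datetime(2012, 8, 1, 9, 0), None),                          # open > 72 hrs
    ("BUG-2", datetime(2012, 8, 7, 14, 0), None),                         # open < 72 hrs
    ("BUG-3", datetime(2012, 7, 30, 10, 0), datetime(2012, 8, 2, 16, 0)), # resolved
]

def burn_down_point(issues, now):
    """Return (active, active_over_72h) for one point on the chart."""
    active = [i for i in issues if i[2] is None]
    stale = [i for i in active if now - i[1] > timedelta(hours=72)]
    return len(active), len(stale)

active, stale = burn_down_point(issues, datetime(2012, 8, 8, 12, 0))
# active == 2, stale == 1
```

Plotting both counts over time gives the trend described above: when the stale count converges on the active count, the fix rate is falling behind.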

Since leads triage issues in their feature area every day we can re-assign/re-prioritize work to make sure important things are addressed and the burn down happens in a timely manner without burning out people.

The way nearly every team I have ever been a part of has handled this is that bugs are triaged regularly by some subset of the product team, usually the dev lead, QA lead, and PM lead. The triage team is responsible for deciding whether the bug meets the bar to be fixed and for assigning it to the appropriate team member. During triage a bug can be assigned one of a few triage states:
1. Accepted
2. Rejected
3. Investigate

It should be put into the Investigate state if more information is needed in order to determine whether it should be accepted or rejected. If it is accepted, that should mean that the triage team already has a fairly good idea how big of a work item it is and the availability of the team member it is assigned to.
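A minimal sketch of that triage rule (Python; the state names come from the list above, but the function and its arguments are hypothetical, not any real tracker's API):

```python
from enum import Enum

class TriageState(Enum):
    ACCEPTED = "accepted"        # meets the bar; sized and assigned
    REJECTED = "rejected"        # does not meet the bar to be fixed
    INVESTIGATE = "investigate"  # more information needed first

def triage(enough_info: bool, meets_bar: bool) -> TriageState:
    """Investigate means we can't yet decide between Accepted and Rejected."""
    if not enough_info:
        return TriageState.INVESTIGATE
    return TriageState.ACCEPTED if meets_bar else TriageState.REJECTED
```

The key design point is that Accepted is only reachable once the team has enough information to size the work and pick an owner.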

Any time spent investigating, fixing, or unit testing a bug that has already been accepted is considered time spent fixing the bug. There is no need to break it down beyond that for any purpose that I can think of. Even tracking exactly how long it takes to fix a bug doesn't feel like very actionable data. The only time I can think that it would come into play is if the triage team thought a bug was a quick fix and it ends up being much more complex; then it's just a matter of talking with the developer (hopefully in daily standups) to understand how long it should take to fix. Still, this would be handled on a case-by-case basis for managing a release, and is less useful when thought of as trending data used to make future business decisions.

Why do you need to know how old defects are and whether they are being worked on? Is this to make sure that you're on track for a release? It seems like a much more direct approach would be to have a discussion with each developer about what they are working on and what they expect to finish prior to the next milestone. This is a great value you get out of daily standups.

I go to daily standups for the team I am on, and then another that involves all teams (around 12). I really don't know why they are focusing on the age of defects right now; it seems to make more sense to focus on defects that are holding up a release.
–
Salmonerd Aug 8 '12 at 22:02

"they aren't really penalized or rewarded, but they can be bothered based off of those statistics."

They are indeed being penalized - being "bothered" is a penalty. When people are measured and penalized like this, they quickly learn to game the system.

Consider assigning the defects to folks and finding a way to understand what they are working on without being considered a "bother". Sometimes talking with people, rather than relying on a tracking system, works better.

If you are tracking time to fix, it should be based on priority: P1 in hours, P2 in days, P3 in weeks, etc. As defects get older they can be re-evaluated. But I have found that fixing defects ends these discussions. If it's taking too long, it usually means features are being prioritized over technical debt in the backlog. The amount taken in each sprint can be increased to close out defects.
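One way to sketch that priority-based aging check (Python; the specific targets below are illustrative examples of the "P1 hours, P2 days, P3 weeks" rule of thumb, not fixed values from the answer):

```python
from datetime import datetime, timedelta

# Hypothetical time-to-fix targets by priority.
SLA = {
    "P1": timedelta(hours=4),
    "P2": timedelta(days=3),
    "P3": timedelta(weeks=2),
}

def needs_reevaluation(priority, opened_at, now):
    """Flag a defect whose age exceeds its priority's time-to-fix target."""
    return now - opened_at > SLA[priority]

now = datetime(2012, 8, 8, 12, 0)
# A P2 opened a week ago is well past its 3-day target.
overdue = needs_reevaluation("P2", datetime(2012, 8, 1, 9, 0), now)
```

Running such a check during triage surfaces the defects worth re-evaluating, instead of chasing per-defect estimated completion dates.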

As already stated, QA, dev, and PM collectively set the priority. Product managers too, if you are affecting feature delivery.