When you say in one of your comments "finding it can take 3 days." do you mean finding the phenomenon or investigating the cause? If the former, surely you should have system testers continually, methodically searching for these? If the latter, you know about the bug, I think you should estimate it like any story.
–
Eoin CarrollDec 5 '11 at 16:28

I mean investigating the cause. If bug causes are unknown and need investigation we very often observe we can spend many hours searching for the bug root cause and only a few minutes actually fixing it.
–
PomarioDec 5 '11 at 17:38

4 Answers
4

Velocity is X because velocity measures how many story points (= business value) you delivered in the last sprint. But you have a second value available: capacity. Capacity is the number of hours / man-days you have available for the sprint. So if you now double the capacity, you can assume that your velocity will increase. It is up to the team to judge whether doubling the capacity will double the expected velocity. Of course, not every increase in capacity leads to an increase in velocity. For example, adding a team member will definitely not increase velocity immediately (the reverse is usually true for a few sprints).

If your capacity changes frequently by a large number of hours / man-days, it will distort the average velocity, and its value for forecasting will be much worse.
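The capacity-scaling idea above can be sketched with some simple arithmetic. This is only an illustration with made-up numbers, and the linear assumption is exactly the one the answer warns about (e.g. a new team member usually lowers velocity before raising it):

```python
def forecast_velocity(avg_velocity, avg_capacity_hours, next_capacity_hours):
    """Naively scale the average velocity by the capacity ratio.

    This assumes velocity grows linearly with capacity, which the team
    must sanity-check; ramp-up effects often break this assumption.
    """
    return avg_velocity * (next_capacity_hours / avg_capacity_hours)

# Team averaged 20 points on 300 hours of capacity;
# next sprint has 450 hours available.
print(forecast_velocity(20, 300, 450))  # 30.0
```

The point is not the formula but the judgment call: the team decides how much of the extra capacity will actually translate into delivered points.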

By the way, spending 50% of your time on bug fixing should raise some questions about how you manage the quality of your product, i.e. how you build your automated test suite.

Add the time spent fixing a story's bugs to the time spent developing the story (the total time spent on the story genuinely includes fixing the bugs: the story was not acceptable to the product owner in its buggy state). That total is an indication of your velocity.

unfortunately that cannot be done as the bugs pre-date our implementation of Scrum
–
PomarioDec 5 '11 at 14:21

2

@Pomario Why not just treat these historical bugs as User Stories? Clearly the time being taken to address them significantly impacts time that could be spent delivering new features.
–
maple_shaft♦Dec 5 '11 at 14:23

@maple_shaft. Those historical bugs are already our User Stories. Though, we cannot estimate their fix time. Correcting a fault can take 3 minutes (1 line of code) whereas finding it can take 3 days.
–
PomarioDec 5 '11 at 14:26

3

@Pomario Why can't you estimate that in terms of story points, hours, or whatever measurement of time you assign to stories? Your engineers should understand the system. Given the defect report and knowledge about the complexity of the subsystems that could be playing a role in the defect, they should be able to estimate a total time it will take to find the location of the defect, write test cases to confirm the defect, fix the defect, and use the test cases to confirm the removal of the defect. If not, perhaps there needs to be time spent on improving your team's ability to estimate.
–
Thomas Owens♦Dec 5 '11 at 14:35

2

Exactly right, @Graham. But even if they don't know exactly what code is involved, they should have enough information to know which modules (at the class level, if not the method level) are most likely involved, and enough knowledge of the complexity, coupling, cohesion, and overall design of that part of the system to make an educated estimate of how long the fix will take.
–
Thomas Owens♦Dec 5 '11 at 16:09

If the discovered defects are not related to a currently active story, the defect prioritization and estimation should be part of sprint planning.

Prior to planning, the team can assign points to defects in the same way they assign points to features, which will allow the product owner to prioritize defects in relation to features. The time required to investigate and resolve defects should be defined as tasks associated with the defect.

It's really crucial that the product owner be made aware of the level of effort required to resolve each defect so he/she has a clear picture of what is going on. I would hesitate to let a philosophical debate over the definition of "velocity" get in the way of this.

However, if you're trying to show management that a high defect rate is impacting the rate at which you deliver features, you can always show this via defect counts or defect/feature ratios.

I think this would be better than showing declining velocity without explanation.
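A defect/feature ratio like the one suggested above is trivial to compute from sprint data. A minimal sketch, with purely illustrative numbers:

```python
def defect_feature_ratio(defect_points, feature_points):
    """Fraction of delivered points that went into defect work.

    Returns 0.0 for an empty sprint to avoid division by zero.
    """
    total = defect_points + feature_points
    return defect_points / total if total else 0.0

# A sprint that delivered 15 points of fixes and 15 points of features
# spent 50% of its delivery on defects.
print(defect_feature_ratio(15, 15))  # 0.5
```

Tracked sprint over sprint, this number makes the cost of the bug backlog visible to management without muddying what "velocity" means.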