My manager asked me to write an estimate of work hours and a risk assessment of source code changes for a defined task.

While the first is no problem for me and there are many resources on the web, I cannot get my head around the latter.

I already asked for a clearer description of risk estimation and was told I should state the risk of needing "follow-up code changes due to the changes made" and of a "potential loss of stability of the overall software".

How can I approach this task (rules of thumb, documents about risk estimation, ...)?

Some sources say there are on average 15-50 defects per 1000 lines of code, so if you can estimate how much you're about to add or change, that might be something. Not useful (risk estimates rarely are), but at least it's some kind of number that might keep them off your back.
– Martin Wickman, Jul 5 '11 at 12:36
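The defects-per-KLOC figure in the comment above turns into a rough range with trivial arithmetic. A back-of-envelope sketch (the 15-50 rate is the comment's number, not a measured one, and the 2000-line change size is an invented example):

```python
# Back-of-envelope defect estimate from a defects-per-KLOC rate.

def defect_range(lines_changed, low_rate=15, high_rate=50):
    """Return (min, max) expected defects for a change of `lines_changed` lines,
    given per-KLOC defect rates."""
    kloc = lines_changed / 1000.0
    return (low_rate * kloc, high_rate * kloc)

# Example: a task that adds or changes roughly 2000 lines.
low, high = defect_range(2000)
print(f"expected defects: {low:.0f}-{high:.0f}")  # expected defects: 30-100
```

Worthless as a prediction, as the comment says, but it gives the manager a number with a visible (and visibly crude) derivation.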

3 Answers

The two risk areas amount to "do you know everything you need to know?" and "will you break anything else?"

You should probably give a qualitative analysis of the risks.

List the questions you have -- the mysterious areas -- the things that might require follow-up or that you might break because you don't understand them.

The "numbers" are 1.0. You will have follow-up changes (there are always follow-up changes) and you will introduce new previously unknown bugs (unless you have a really good testing discipline, which sounds unlikely from the situation and the question).

Ideally, you understand the whole problem and won't need follow up. Is this really true? What evidence can you give that you understand everything? If you have evidence, present it and claim that there's no risk of follow-up. If you don't have evidence that you understand the whole problem, list the things you don't understand. That's the risk.

Ideally, you won't break anything else. Is this really true? What evidence can you give that your change is isolated and you won't break anything else? Again, list the things that might break; that's the risk.

Note that perfect knowledge comes only after you've actually completed all the changes. Only after you make the code change will you know whether or not you knew everything and didn't break anything else.

Looking into the unknowable future, you can only guess if you know everything and won't break anything.

Consequently, there's a point of diminishing returns. You can provide some evidence that you understand the change and won't break anything, but you can't be absolutely sure of your analysis except by actually making the change.

While I feel "risk estimation" is mostly a bad metric, it is even worse when given to a developer, especially the very developer who will be doing the work. Your manager might as well ask you to single-handedly do your own acceptance testing and your own code review as well.

If there is a right way to do risk analysis for a task or feature, I would approach it with a few measures:

How many independent components or function points does the feature touch?

Do the affected features or components have clear and complete design documentation or technical specifications?

Are the affected features or components covered by automated unit tests?

What other measures of technical debt are strangling the system that could be problematic?

If you can numerically account for these kinds of things and roll them into some kind of BS formula that produces a risk ratio, your manager will probably be happy with that. Formulas like the one I described, while mostly meaningless, are like managerial porn.
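To make the "BS formula" concrete, here is one way the four measures above could be combined into a single 0-to-1 risk ratio. Every weight, scale, and threshold below is invented for illustration; none of this is an established metric:

```python
# Hypothetical "risk ratio" sketch -- the weights and scales are made up
# to illustrate the idea, not an established or validated metric.

def risk_ratio(components_touched, has_docs, test_coverage, tech_debt_score):
    """Combine four rough measures into a 0..1 risk figure.

    components_touched: number of independent components/function points affected
    has_docs:           True if affected components have design docs or specs
    test_coverage:      fraction (0..1) of affected code under automated tests
    tech_debt_score:    subjective 0..1 rating of how badly debt strangles the area
    """
    # More components touched -> more risk, saturating at 10 components.
    spread = min(components_touched / 10.0, 1.0)
    # Missing documentation and missing tests each add risk.
    doc_risk = 0.0 if has_docs else 1.0
    test_risk = 1.0 - test_coverage
    # Weighted average; the weights sum to 1 and are entirely arbitrary.
    return 0.35 * spread + 0.2 * doc_risk + 0.25 * test_risk + 0.2 * tech_debt_score

# Example: touches 4 components, no docs, 30% test coverage, moderate debt.
print(risk_ratio(4, False, 0.3, 0.5))
```

The output is a tidy-looking ratio a manager can put in a slide, which is exactly the point: the number is defensible-looking, not meaningful.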