Wednesday, July 18, 2007

For software testers, things are a little more ... challenging. For example:

What quality will the code be when it is delivered to test? Most of the time, we don't know. Vast differences in quality can make testing take a longer - or shorter - length of time.

What is the management expectation for the software when it is released? Most of the time, these expectations aren't quantified - if they are articulated at all.

Once bugs are identified, how quickly will the developers fix them? Unless you've had the same team for several projects, and the projects are similar, you probably won't know this. If the answer is "not that fast," then it's probably not a testing phase - it's a fixing phase - and its length isn't determined by the testers.

For every one hundred bugs the developers fix, how many new bugs will be injected? I've seen teams where this number is fifty, and I've seen teams where this number is much less - but I can't think of a team where this number is zero. Obviously, the bigger this number is, the longer testing will take.

I could go on - the point is that we've got a whole lot of variables that are beyond the control of the testers. In fact, it's probably impossible for the testers to even know what those variables are. So how can you provide an estimate?

Offhand, I know of only one career field that has to deal with a similar problem. That is -- your friendly 401(k) investment advisor. I always thought that "advisor" was a bit of a misnomer, because providing advice is the one thing that guy doesn't do. Instead, he carefully explains that he doesn't know where Social Security will be in fifty years, and that he doesn't know the value of your house, how it will appreciate, or whether you will pay it off early or take out home equity loans. He doesn't know what inflation will be, or how your investments will perform.

So he gives you a piece of paper with a big, complex formula, and tells you to figure it out for yourself. After all, he will say, "It's your money."

Testing is an investment of time by a business owner. It is senior management's money - yet we couldn't get away with this, now could we?

2 comments:

Thanks for starting this thread, Matt ... I believe we can discuss a lot on estimation (a follow-up on my lightning talk on traps in test estimation at STAR East).

>> What quality will the code be when it is delivered to test?

This is an excellent question - often put aside by saying, "We know that it is difficult to say - let us arrive at some educated guess."

Let me ask you ... What kinds of answers would you accept, if *any* answer is possible at all? How would you suggest the PM, or whoever is asking you for the estimate, answer this question? Which answer would help you, and to what extent?

>> What is the management expectation for the software when it is released?

Again, what answer would help you? And how would the estimate benefit from it?

>> Once identified, how quickly will the developers fix the bugs?

Let us say the PM answers this question by saying, "Critical and high-priority bugs in at most 3 days, and the rest within a day."

What would you do now? How will you use this answer to refine your estimates?

>> The point is that we've got a whole lot of variables that are beyond the control of the testers.

Great point - I think this is the key message of the post.

Let me play devil's advocate here and say: a typical PM would respond, "Yes, I know there are constraints - who is asking you to give a 100% correct estimate? What is your educated guess?"

Whenever I spoke about challenges in estimation (the model of testing; project, process, and people constraints; dependencies and variables beyond the control of test ...), I was instantly shot down with arguments along the lines of the above --

I was being *discouraged* from digging further into the challenges of estimation, with the argument: "Mr. Expert, we know (or pretend to know) that there are challenges - what is your solution? Instead of focusing on the problem, focus on the solution."

My point is: when you don't know the problem well enough, how can you formulate a solution, even if it is a SWAG or a (sophisticated) educated guess?

>> Offhand, I know of only one career field that has to deal with a similar problem. That is -- your friendly 401(k) investment advisor.

The analogy between testing and 401(k) planning is an interesting one ...

Common to both are "future uncertainty" and "stake."

How would a 401(k) advisor go about doing software test estimation?

>> So he gives you a piece of paper with a big, complex formula, and tells you to figure it out for yourself.

I think this is what is expected of us when we talk about challenges with estimation. The 401(k) planner has a sheet - where is ours?

>> I could go on - the point is that we've got a whole lot of variables that are beyond the control of the testers.

And that makes predicting the future even harder than normal. :)

When a development manager asks how long it will take to test something, I'm often tempted to respond with something like: "It depends on how many bugs your team codes into it."

Yet, I keep going back to Paul Coombs' rules of estimation. There are 12 of them, but the first three set the stage for the rest:

#1) Your estimate will be wrong.
#2) You can always provide an estimate.
#3) Every estimate must have a contingency allowance.

Estimation is difficult because we are being asked to predict the future. This is why I believe we should not freeze estimates or test plans. I think it was Esther Derby who once suggested breaking work into tasks that take less than two days each, so that we'll never be more than two days behind schedule. :)
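Coombs' rules don't prescribe a formula, but as an illustration of rules #2 and #3 - you can always produce an estimate, and it must carry a contingency allowance - here is a minimal sketch using the common three-point (PERT-style) weighted average. The function name, the 20% contingency, and the sample numbers are my own assumptions, not anything from the post:

```python
def estimate_with_contingency(optimistic, likely, pessimistic, contingency=0.20):
    """Three-point estimate plus a contingency allowance.

    The weighted mean leans toward the 'likely' value (classic
    PERT weighting); the contingency factor pads the result,
    per Coombs' rule #3. All inputs are in the same time unit.
    """
    expected = (optimistic + 4 * likely + pessimistic) / 6
    return expected * (1 + contingency)

# Hypothetical test pass: best case 4 days, most likely 5, worst case 8.
days = estimate_with_contingency(4, 5, 8)
print(round(days, 1))  # the padded estimate, in days
```

The point isn't the particular weights - it's that the formula forces you to name your uncertainty (the optimistic/pessimistic spread) instead of hiding it, much like the 401(k) planner's worksheet.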