Posted
by
Soulskill
on Tuesday February 09, 2010 @02:11PM
from the step-one-invent-time-travel dept.

itwbennett writes "It can take a fairly stable team of programmers as long as six months to get to a point where they're estimating programming time fairly close to actuals, says Suvro Upadhyaya, a Senior Software Engineer at Oracle. Accurately estimating programming time is a process of defining limitations, he says. The programmers' experience, domain knowledge, and speed vs. quality all come into play, and it is highly dependent upon the culture of the team/organization. Upadhyaya uses Scrum to estimate programming time. How do you do it?"

Estimating accurately isn't so much an art of estimating accurately as it is being able to figure out what to chop so that the product still ships with some semblance of being on time, and in a way that people like. The deal is, you have to be able to get a screen or database up and running in some x number of hours, and prioritize to fit that estimate. Once you start doing that, you can adjust your estimates to allow for more features or fewer.

Are we estimating a tweak to an existing feature? Or the creation of an architecture for a high-volume real time production system?

That matters. Methods that are cost-effective for one are worthless for the other.

And there are other concerns... like how much are we allowed to spend on the estimate itself? That will put limits on the accuracy and precision (the difference between which is critical to understand in any methodology), as well as further determine what kind of estimation methodology can be used.

Very true. And, for a business-oriented website, it may seem foolish to blow 40-80 hours on getting the look of a page right. But if you are making a more media-oriented site, it's probably nowhere near enough.

Estimating programming time is often estimating how long it will take to do something that has never been done before.

And if it has been done before, it's out the door to India already, which I'm sure someone else will also point out.

It seems to me that a lot of project estimation is done to serve a hidden purpose.

Yes, and the hidden agendas are formal inputs to the schedule estimation problem. Been there, done that. One of the major terms in the non-linear politics is who gets the blame when a product shipped with working functionality proves impossible to extend in the next coding iteration because the wrong foundation was chosen. Do you want the estimate consistent with my professionalism, or with

The deal is, you have to be able to get a screen or database up and running in some x number of hours, and prioritize to fit that estimate. Once you start doing that, you can adjust your estimates to allow for more features or fewer.

Unfortunately, features usually get cut by chance rather than by plan, since no one wants to give up on anything, including the deadline. As you suggest, it would be better to do it in a conscious way, but that seems way too rare in practice.

It's the Project/Program Manager's job to manage scope expectations in such a way that features do indeed get cut by plan rather than chance. Having said that, very, very few PMs I've worked with are actually good at this aspect of their jobs. Most of them prefer to avoid conflict and maintain a sort of comfortable fiction with the client until things reach a point where that fiction is simply unmaintainable.

If you can get people to actually prioritize features, then you have a chance in hell of cutting features by plan. The common first response to such a request is to rate all features as pri 1, to which the formulaic response is "so you mean I can just implement them in any order I like?" If the business is at all reasonable, they can be brought around to at least grouping features into required, really want, nice to have, and maybe, but the first challenge there is talking to them at all. This isn't really something

Exactly. Ship date is a feature. It will have lower priority than some features, and higher priority than some other features. I've never seen a team that could estimate, months in advance, when a particular feature set would ship. I've been part of great teams that regularly review progress and have the power to adjust priorities.

Exactly. Ship date is a feature. It will have lower priority than some features, and higher priority than some other features.
I've never seen a team that could estimate, months in advance, when a particular feature set would ship.
I've been part of great teams that regularly review progress and have the power to adjust priorities.

Ship date was the feature dropped in Duke Nukem Forever... so it ships with all features.

The trouble is that one of the biggest impact items - i.e., the customers or vendors - will almost never be the same from project to project. Other people's experiences may differ, but I always find that the people involved are usually the biggest factor determining whether a project meets deadlines, and you have no idea what they will be like until the project begins. Even the same person may perform differently from project to project: on the first they may be a dedicated resource, on the next the project is one of a dozen

1) Make a guess, a very generous one. Make sure there's plenty of space.
2) Double it, as you will need an equal amount of time for testing and bugfixing when you're done writing.
3) Double it again, as Murphy will make sure everything fails, which will lead to inevitable delays.
4) Multiply by pi.

Ah, good ol' function point analysis. I thought most intelligent people realized that it was a whole load of bullshit right around 1983 or so.

Doesn't mean it's not useful -- just not useful for estimating actual time. You create an estimate, you've got data to back it up. Sure it's just as much a WAG as anything, but it's such a Scientific WAG, what with all the numbers and calculations. Gets management off your back until you miss enough deadlines, checkpoints, tollgates, or whatever they want to call them.

I take the amount of time I think it will take, double it and move it up a time unit.

So, if I think it will take two days, I estimate 4 weeks. If I think it will take a week, I estimate two months and so on.
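For what it's worth, the rule is mechanical enough to sketch in a few lines of Python (the unit ladder and function name here are my own assumptions, not the poster's):

```python
# "Double it and move it up a time unit" -- the unit ladder is an assumption.
UNITS = ["hours", "days", "weeks", "months", "quarters", "years"]

def pessimize(amount, unit):
    """Double the estimate and promote it to the next larger unit."""
    bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return 2 * amount, bumped

print(pessimize(2, "days"))   # (4, 'weeks')
print(pessimize(1, "weeks"))  # (2, 'months')
```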

Exactly my method too. I assume we both stole it from the same place, but I can't remember where. I am also ashamed that it is often right.

Which either means I'm lousy at estimating or brilliant. Or maybe just lousy at implementation. Or maybe nothing at all.

But it doesn't matter, of course, since the boss will ask "Are you sure?" And we all know that that means "try to come up with an estimate that fits my predetermined schedule." So you either cut features/quality by plan or by fact. Usually by fact, since no one wants to give up anything. So you give up whatever doesn't happen to be done (or give up your release date -- which is also a feature of a sort).

Exactly my method too. I assume we both stole it from the same place, but I can't remember where.

fortune, aptly, sprang this on me last night:

The Briggs - Chase Law of Program Development:
To determine how long it will take to write and debug a
program, take your best estimate, multiply that by two, add
one, and convert to the next higher units.

The method I was taught in school, back in the days when Arthur Andersen existed, prior to Andersen Consulting or AssVenture (or whatever they're called now), was credited to them, and looked like this:

Estimate how long it will take if everything goes right. Call it T1.
Estimate the expected time it will take, knowing how some things usually go wrong. Call it T2.
Estimate the worst possible case, if everything goes wrong and there are lots of unforeseen issues - you are 99% sure you can get it done within this amount of time. Call it T3.

The weighted average estimate is (T1 + 4*T2 + T3) / 6

This yields some very odd T values, but often works out to be a reasonable estimate. The developers may tell me "If all goes well, it'll take 2 days. I expect it to be 4 days. But worst case, it might be 40 days." And you end up with a 9.7-day estimate (which is SIGNIFICANTLY different from the "I expect it to be 4 days").
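This is the classic three-point (PERT-style) weighted average, which can be sketched in one function (the function name is mine):

```python
def pert_estimate(t1, t2, t3):
    """Weighted three-point estimate: (T1 + 4*T2 + T3) / 6."""
    return (t1 + 4 * t2 + t3) / 6

# Best case 2 days, expected 4 days, worst case 40 days:
print(round(pert_estimate(2, 4, 40), 1))  # 9.7
```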

----

The part I learned from working for a major computer manufacturer was this:

Take whatever estimate the developers give you, and tell the client to expect it in double that time (from today). If the lab tells you they'll have a fix by tomorrow, tell the client it'll be there in two days. If the lab tells you it'll be ready in three months, tell the customer 6 months. That way, as time goes by, the estimates become more real, and the variation from the estimate becomes more refined. And it accounts for the unavoidable time between when the lab releases the work and when it gets installed at the customer's site.

If, in January, I was told by the developers that it'd be ready in 2 months, I will tell the customer to expect it in 4 months (May). If in February, I haven't gotten a revised time-frame, then we still expect it in four months (June).

Manage expectations, and don't let the customer down (particularly when things are outside my control).

For me it really depends on how accurately they spec it out. If it is a general idea I can be an order of magnitude off easily. If it is a very accurate spec and needs little change throughout then I tend to be pretty accurate.

I don't work with detailed specs, but I do work with estimating a lot of small tasks (on the order of days at most) instead of estimating one big thing. I think that's sort of the Scrum-my angle they're talking about in TFS, though.

If the scope-of-work document is inaccurate, then the time estimate will be inaccurate.

Give me a real scope of work, and I'll give you a real estimate of time. If someone would clamp battery terminals to the executives' nipples and not stop shocking them until they got it through their heads, we all wouldn't have to guess.

I'll throw together a script that takes in all of the project parameters as input, then based on the number of people in the team, past performance, scheduling deadlines and intermediate deliverables, comes up with an estimate for final delivery.

Then I throw out that number completely, and just multiply the time it took me to develop that script by five. This method has proven disturbingly accurate.

We have a name for this - PIDOOMA, which stands for Pulled It Directly Out Of My Ass. If the numbers are particularly rough, we will sometimes call it unwashed PIDOOMA to emphasize that we have not analyzed the numbers at all.

In all seriousness, a friend of mine who's been in the business for a while goes with a refinement of W.A.G.:
1. Get a WAG from the developer.
2. Apply this formula: Real estimate = WAG * (actual time for previous features / WAG for previous features)
3. Tell the developer that his original WAG is what we're using, so he actually hits something pretty close to the real estimate.

As a result, management has a pretty good (although not completely perfect) idea of how long something is going to take, based on deve
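The calibration in step 2 is simple enough to sketch (the function name and the sample numbers below are made up for illustration):

```python
# Step 2 of the W.A.G. refinement: scale the guess by historical overrun.
def calibrated_estimate(wag, actual_previous, wag_previous):
    """Real estimate = WAG * (actual time for previous / WAG for previous)."""
    return wag * (actual_previous / wag_previous)

# Developer guesses 40 hours; past guesses totaling 200 hours took 300 hours:
print(calibrated_estimate(40, 300, 200))  # 60.0
```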

But the folks who used the table in the lunchroom complained, so we now use the far more sophisticated system of tea leaf reading. This upsets nobody but the tea drinkers, as we frequently need to use their cups before they're done, but then tea drinkers are wussies anyway.

My favorite estimating story... A friend of mine asked how you estimate software, since writing software is a potentially unbounded task. He said, "It's like trying to estimate how long it will take to find my keys. I have no idea until I find them!" I actually found that to be a great analogy. The first time you lose your keys, you could imagine estimating based on how big the search area is, how cluttered the search area is, etc. But you're go

Typically, the solution is much more malleable than time or budget. Determine the baseline requirements. Then determine the resources available (time and money). The project will likely fill the latter two. Success should be measured against adherence to the requirements, not time or money saved.

I do know what not to do. When someone estimates a programming job in number of beers, I get worried. If someone says that a security patch to an existing application is a one-beer job, then I hope for the best, and that regression testing and QA catch anything new that cropped up.

I really don't know why this is as accurate as it always turns out to be, but it really works.

I look at the specs of what needs to get done, and I do quick back-of-the-envelope estimates of how long each feature will take (e.g., feature A: 5 days, feature B: 2 weeks, etc.). Then I multiply each estimate by 3 and add it all up. If some sort of hardware interfacing is required, I multiply the estimate by 8.

My supervisor for my Master's degree taught me this trick, when I was programming a sensor control system.
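A rough sketch of this multiplier rule, assuming (my reading, not necessarily the poster's) that the 8x hardware factor replaces the 3x factor for the whole project:

```python
# Back-of-the-envelope padding rule; multipliers come from the comment above.
# Assumption: the 8x hardware factor applies in place of the 3x factor.
def padded_total(feature_days, hardware_involved=False):
    """Sum per-feature estimates, multiplied by a pessimism factor."""
    factor = 8 if hardware_involved else 3
    return sum(days * factor for days in feature_days)

# Feature A: 5 days, feature B: 10 days (2 weeks):
print(padded_total([5, 10]))                          # 45 days
print(padded_total([5, 10], hardware_involved=True))  # 120 days
```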

Scrum is a way of chunking development into well-defined portions. The idea of using Scrum to estimate time just doesn't make sense. Everything in Scrum takes the same amount of time: two weeks (or one week, or whatever your sprint length is). The difference is that long projects are implemented over multiple sprints, since obviously not everything can be done in two weeks. So the estimate is not of how long something will take, but of how many sprints' worth of backlog items are required to reach some known endpoint. Once the backlog has been created and agreed upon by the team, estimating the necessary time becomes a matter of multiplication: 12 sprints * 2 weeks = 24 weeks to finish the product.

This makes you shift your thinking from "how long will it take to do all these things" to "how can I break this product development into chunks which each fit into a two-week period?" That's much easier than making wild-ass guesses about the time it takes to do something.

In reality, by the end of the project you are creating new backlog items as fast as, or faster than, you are fulfilling them. There's always code that needs to be redone, newly found bugs to fix, performance testing that needs to be done, etc. You can't "create and agree on" performance, debugging, etc. backlog items ahead of time.

I make the initial best-guess estimates based on past projects and past developer performance. I track the initial estimate, and then I track all effort spent as it is logged, i.e., each check-in gets an "effort spent" number. I then track actual vs. estimate and come up with a total amount of overrun so far. I take that overrun, get a percentage (e.g., "over by 15%"), and then add that back to the total estimate.

So, if the total estimate is 100 man hours, and we are currently over by 15%, I then say it will actually take us 115 hours total to finish the project.

This is based on the sage wisdom of The Mythical Man-Month: if your first estimate is off, so are all your estimates, usually by the same amount. As depressing as that might initially sound, it's actually accurate, and it gives you a great tool for getting a real estimate once the project is underway.

So I mark my first estimates as "estimates" and then I consider the adjusted estimate once we are 2-4 weeks in to be more accurate. It has usually put us within one to two weeks of the actual delivery date -- which, based on my experience with software development over the past 15 years, is really good estimating:-) The norm on the projects I was a developer on was that overrun was closer to 90-100%. On the last project I managed it was 25% with new developers -- I considered that a victory:-)
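The adjustment described above amounts to scaling the whole estimate by the overrun ratio observed so far; a minimal sketch (function and parameter names are mine):

```python
# Mythical Man-Month-style adjustment: if the first estimates are off,
# they are usually all off by roughly the same factor.
def adjusted_total(total_estimate_hours, spent_hours, estimate_for_spent_work):
    """Scale the whole estimate by the overrun observed so far."""
    overrun = spent_hours / estimate_for_spent_work  # 1.15 means 15% over
    return total_estimate_hours * overrun

# 100-hour project; work estimated at 40 hours actually took 46 (15% over):
print(round(adjusted_total(100, 46, 40), 1))  # 115.0
```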

I would advise reading "The Mythical Man Month" by Frederick P. Brooks. It is considered "The Bible" of the human elements of Software Engineering by many. By "human elements" I mean to include your request for information on estimating project time.

Here is a quote from page 20 of the book based on one portion of a software engineering project:

"In examining conventionally scheduled projects, I have found that few allowed one-half of the projected schedule for testing, but that most did indeed spend half of the actual schedule for that purpose."

Do a best-case estimate of the time required, assuming everything goes perfectly and that there are no surprises. Next, double your estimate. And finally, switch to the next highest unit of time measure.

A quarter-hour change will take half of a day. 2 hours becomes 4 days. 3 days becomes 6 weeks. A 6-month project will take 12 quarters, or 3 years.

It's eerie how often that rule of thumb seems to accurately depict the actual calendar time required -- eerily enough that when a so-called "realistic" e

For simple projects I can usually draw upon past experience. With complex projects, if the "client" was good about thinking through what they really are asking for, estimating the time needed is normally still pretty straightforward. But most of the time the client hasn't truly thought about any specifics regarding what they think they want, and will almost certainly bring up very involved new features mid-project - so in those cases I make a wild guess and then round up.

Good for you - if your boss: a) doesn't hold you to that estimate (e.g. requires you to kill yourself in overtime to meet it, even after the project's requirements have changed completely) and b) lets you come up with the estimate in the first place, rather than rejecting every estimate you supply until you come up with the "right" one.

There are situations where I can take a wild guess, but the questioner must understand that a) it's a wild guess, and b) it only works in a perfect world; if something goes wrong, and especially if that something is something I have no power over, then you can safely double my bet. These situations are like being asked to write some web-based app for people who really don't know what they want. (Web-based apps are for some reason really hard for me.)

I almost always bill per hour. Most of the time my clients give me an email or verbal idea of what they want over the phone. I try to get as many questions answered as I can without wasting time.

I then take the task and break it down into as many bite-sized subtasks as I can. This does a couple of things:

1. It shows where I have unanswered questions
2. It's easier to estimate "add 3 new roles to management interface" and "add cart admin role to admin interface" than it is to estimate "make admin area more secure".

Once that's complete (for this round), I put all those line items into a spreadsheet. I then estimate the number of hours each takes in a reasonable best, and worst, case scenario. So "add cart admin role to admin interface" might be 3-4.5 hours.

I then add all those up, and add about 33% for planning, and 33% for testing/deployment. Sometimes it can be more or less depending on my experience with the client. I then give that spreadsheet to the client and say, I'm pretty confident your price will be within this range. The areas that have high variability are the areas we need to work on nailing down further. I will bill you actual time taken, but I'll let you know if the range is nearing the top number so we can re-evaluate. That almost never happens, or I should say that when it does, it's almost always because the client has asked for more work, and they understand this.
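This spreadsheet arithmetic can be sketched as follows; the task names are made up, modeled on the examples above, and the 33% overhead figures come from the comment:

```python
# Made-up subtasks with (best, worst) hour ranges, as described above.
tasks = {
    "add 3 new roles to management interface": (2.0, 3.0),
    "add cart admin role to admin interface": (3.0, 4.5),
}

def quote_range(tasks, planning=0.33, testing=0.33):
    """Sum the best/worst cases, then add planning and testing overhead."""
    best = sum(lo for lo, hi in tasks.values())
    worst = sum(hi for lo, hi in tasks.values())
    overhead = 1 + planning + testing
    return best * overhead, worst * overhead

low, high = quote_range(tasks)
print(f"I'm pretty confident it will be within {low:.1f}-{high:.1f} hours")
```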

Customers almost always appreciate this approach and find it helpful. Most of the time I only do this for the first project for a client, and then they just ask me to give them a verbal idea of how hard the project will be since they trust me.

For the very occasional fixed bid work I do, I just take the high number, pad it a bit, and double my hourly rate. I tell customers ahead of time though that they are better off hiring me hourly in almost every circumstance. I usually don't spend a lot of time on fixed bid work because, frankly, I usually don't want it.

And probably more important than estimating accurately, we aim to estimate consistently. Then we compare the actual rate of feature completion against the estimated size of remaining features. We've landed within a single sprint of our estimates over a year-long release with 20+ developers.

I would recommend reading "Software Estimation: Demystifying the Black Art" [http://www.stevemcconnell.com/est.htm]. When estimates are created, there are many tasks besides "programming" that need to be done but are totally forgotten in the estimates, and thus throw things off from the very beginning. We have to admit from the start that it is an estimate and hinges on certain unknowns. If the unknowns are cleared up, we can be more accurate with our quoting (this is why requirements gathering should be done with careful attention). Also, since estimates are just that, they should not be a single number but a range (if you have to give a number, choose the far end and be sure you are confident it can be accomplished by then - with a minimum of 90% certainty - and give the confidence along with the estimate). One thing that I have learned is that I never negotiate on estimates/price, I only negotiate on functionality. If a manager/client wants it quicker/cheaper/fewer hours, fine, but I'm not going to change the number unless the functionality changes or more unknowns are cleared up (helping me quote more accurately).

Take a look at Tom DeMarco's Controlling Software Projects [amazon.com]. He deals with the issues behind estimating (including that one of the reasons we're so bad at estimating is that we get so little practice - much of what we call "estimating" is actually deadline negotiating). He ends up suggesting a separate measuring and estimating team - probably out of bounds except for fairly large companies, but the book has some good insights.

I've spent a lot of time working with many different teams and trying lots of different estimation techniques, and I've had the best success with the ones that let the team work together to come up with an estimate they all believe in. My best results came with Wideband Delphi [stellman-greene.com], which I've been able to use in both Agile and non-Agile projects. I've actually got a chapter on estimation in one of my books -- you can download the PDF of it [stellman-greene.com].

Also, Mike Cohn has a lot to say about planning Agile projects on his blog [mountaingoatsoftware.com]

We break the project into use cases.
Estimate each use case.
Identify the risk of each use case (High = new stuff that may not work or is hard to predict, Low = straightforward coding to implement).

Divide the work into time blocks (3 to 5 weeks; I liked 1-month increments).
Each month, measure actual progress against plan.

Another thing I do for my resources is to maintain an ongoing metric of whether they over- or underestimate, and apply it to their estimates. So the eager girl who says she can do it in 50 hours but took 75 gets a +50% added to her ongoing average. Meanwhile, the cautious lad who estimated 80 and took 60 gets a -25% put in with his average.
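A running per-developer bias metric like this might look something like the following sketch (the names and data structure are my own invention):

```python
# Track each developer's estimation error and apply the running average
# to their future estimates, as described above.
from collections import defaultdict

history = defaultdict(list)

def record(dev, estimated_hours, actual_hours):
    """Store the signed fractional error: +0.5 means 50% over the bid."""
    history[dev].append((actual_hours - estimated_hours) / estimated_hours)

def bias(dev):
    errors = history[dev]
    return sum(errors) / len(errors)

def adjusted(dev, estimate):
    """Scale a new estimate by the developer's historical bias."""
    return estimate * (1 + bias(dev))

record("eager girl", 50, 75)    # bid 50, took 75: +50%
record("cautious lad", 80, 60)  # bid 80, took 60: -25%

print(f"{bias('eager girl'):+.0%}")    # +50%
print(f"{bias('cautious lad'):+.0%}")  # -25%
print(adjusted("eager girl", 40))      # 60.0
```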

I usually have a meeting with the stakeholders AND the developers to firmly establish scope and when scope changes, we renegotiate the deadline.

By putting the high risk items early (just do a proof of concept that Xserve 3.85 really does work under Unix 3.71 over a VPN connection before you commit 180 million dollars of dead end work to the project).

While I do not normally overwork my resources, if one of them bids 30 days from now to deliver and they have to work extra to make that date, then so be it.

I am not a software guy, but an analog microwave design guy. All too often management expectations can in no way jibe with reality. Instead, it is often necessary to intentionally underestimate times, and later find specific items to pin the "delays" on. Along the way, so much money and so many resources get involved that effectively you've gotten the project through the second trimester without management realizing they are pregnant. Beatings ensue, but by that point those of us with scarce skills can't be laid off, so we shield our faces and live on to design another day.