No Estimates Middle Ground

by Pawel Brodzinski on July 23, 2013

The idea of no estimates (or #NoEstimates) is all the rage these days. People choose different sides and fight like hell just to prove their arguments are valid, that they are right and the other party got it all wrong. I occasionally get caught in the crossfire by leaving a general comment on a thread about estimation, i.e. one not steering the discussion for or against #NoEstimates.

And that’s totally not my intention. I mean, who likes to get into crossfire?

What No Estimates Means

A major problem with no estimates is that everyone seems to have their own freaking idea of what it is. Seriously. If you follow the discussions on the subject you will find pretty much anything you want. There are crusaders willing to ban all estimates forever, as they are clearly the source of all evil in the software world. There are also folks who find them a useful tool to track or monitor the health of projects throughout their lifecycles. You can definitely find people who bring to the table statistical methods that are supposed to substitute for more commonly used approaches to estimation.

And, of course, anything in between.

So which kind of #NoEstimates do you support, or diss for that matter? Because there are many of them, it seems.

Once we know this I have another question: what is your context? You know, it is sort of important whether you work on a multimillion-dollar endeavor, an MVP for a startup or an increment of an established application.

My wild-ass guess is this: if every party getting involved in #NoEstimates discussions answered the questions above, they’d easily find that they’re talking about different things. Less drama. More value. A less cluttered Twitter stream (yeah, I’m just being selfish here).

Is this post supposed to be a rant against the discussion on no estimates?

No, not really. One thing is that, despite all the drama, I believe the discussion is valuable and helps pull our industry forward. In fact, I see the value in the act of discussing, as I don’t expect absolute answers.

Another thing is that I think there is a #NoEstimates middle ground, which seems to be a cozy and nice place. At least for me.

Why Estimating Sucks

Let me start with a confession: I hate estimating. Whoa, that’s quite a confession, isn’t it? I guess it is easily true for more than 90% of the population. Anyway, as long as I can get away with avoiding estimation, I totally will.

I have good reasons. In the vast majority of cases the estimates I’ve seen were so crappy that a drunken monkey could have come up with something on par or only slightly worse. And last time I checked we were paying drunken monkeys way less than we pay developers and project managers. Oh, and it was in peanuts, not dollars.

It’s not only that. Given that quality of estimates, the time spent on them was basically waste, right?

It’s even worse. It was common for these estimates to be used against the team. “You promised that it would be ready by the end of the month. It isn’t. It’s your fault.” Do I sense a blame game? Oh, well…

And don’t even get me started with all the cases when a team was under pressure to give “better” estimates as the original ones weren’t good enough.

Why We Estimate Then

At the same time, having worked closely with clients for years, I perfectly understand why they need estimates. In the case of a fixed-price contract we have to come up with the price somehow. That’s where estimates come in handy, don’t they? There is also the million-dollar question: so how much will I spend on this thingamajig? I guess sometimes it is literally a million-dollar question…

So as much as I would prefer not to estimate at all, I don’t hide in a hole and pretend I’m not there when I’m asked for an estimate.

All Sorts of Estimates

Another story is how I approach the estimation process when I do it.

I would always use a range. Most of the time a pretty broad one, e.g. the worst-case scenario may mean twice the cost or time of the best-case scenario. And that’s still only an estimate, meaning that odds are we would end up beyond the range.

Whenever appropriate I’d use historical data to come up with an estimate. In fact, I would even use historical data from a different setup, e.g. a different team or a different project. Yes, I am aware that it may be tricky. Tricky as in “it may bite you in the butt pretty badly.” Anyway, if, based on our judgment, the team setup and feature sizing are roughly similar, I would use the data. This approach requires a deep understanding of the dynamics of different teams and can be difficult to scale up. In my case, though, it seems to work pretty well.
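To make the range idea concrete, here is a minimal sketch of deriving a broad range estimate from historical per-feature cycle times. The numbers, the quartile spread and the simple multiplication are my own illustrative assumptions, not Pawel’s actual method:

```python
# Hypothetical cycle times (days per feature) from a past, similar project.
cycle_times = [3, 5, 4, 8, 2, 6, 5, 9, 4, 7, 3, 5]
features_remaining = 20

# Spread the range between an optimistic and a pessimistic
# per-feature rate taken from the empirical quartiles.
cycle_times.sort()
n = len(cycle_times)
optimistic = cycle_times[n // 4]          # roughly the 25th percentile
pessimistic = cycle_times[(3 * n) // 4]   # roughly the 75th percentile

low = features_remaining * optimistic
high = features_remaining * pessimistic
print(f"Estimated effort: {low}-{high} days")
```

Even this crude range (80–140 days here) communicates the uncertainty far better than a single number would.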

I’m a huge fan of Troy Magennis and his work. By the way, despite the fact that Troy flies under the #NoEstimates banner, he couldn’t possibly be farther from the folks advising us just to build the stuff with no estimation whatsoever. One of the most valuable lessons we can get from Troy is how to use simulations to improve the quality of estimates, especially in cases where little data is available.
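As a flavor of the kind of simulation Troy advocates, here is a toy Monte Carlo forecast that resamples historical weekly throughput to produce a probabilistic delivery range. The throughput samples and backlog size are invented, and this is my own sketch, not Troy’s tooling:

```python
import random

# Invented history: features finished in each of the last ten weeks.
weekly_throughput = [4, 6, 3, 5, 7, 4, 5, 2, 6, 5]
backlog = 40  # features still to deliver
random.seed(7)  # fixed seed so the sketch is reproducible

def weeks_to_finish():
    """Simulate one possible future by resampling past weeks."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(10_000))
p50 = runs[len(runs) // 2]          # median outcome
p85 = runs[int(len(runs) * 0.85)]   # 85th-percentile outcome
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```

The output is a range with probabilities attached, which is exactly the sort of answer a broad estimate range tries to give, only derived from data instead of gut feel.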

Finally, I’m also fine with good old guesstimation. I would use it on a rather general level and wouldn’t invest much time in it. Nevertheless, it works for me as a nice calibration mechanism. If the historical data or a simulation shows something very different from an expert guess, we are likely missing something.

Interestingly enough, with such an approach having more details in specifications doesn’t really help, but that’s another story.

On top of that, whenever it is relevant, I would track how we’re doing against the initial estimates. This way I get early warnings whenever we’re going off track. I guess this is where you think “who, on planet Earth, wouldn’t do that?” The trick is that you need to have quite a few things in place to be able to do this in a meaningful way.

A continuous flow of work gives us a steady output of delivered features. An end-to-end value stream means that what is done is really done. At the same time, without continuous delivery and a fully operational staging environment, an end-to-end value stream is simply wishful thinking. Limiting work in progress helps to improve lead time, shortens feedback loops and helps to build up pace early on. And of course a good set of engineering practices allows us to build the whole thing feature by feature without breaking it.

Quite a lot of stuff just to make tracking progress sensible, isn’t it? Luckily, these things help with other stuff too.

Nevertheless, I still hate estimation.

And I’m lucky enough to be able to avoid it pretty frequently. It’s not rare that we have incremental funding and budgets, so the only thing we need is to keep our pace rather steady. And I’m not talking here about particularly small projects only. Another context where estimation is not that important is when the burn rate is (relatively) slow enough that we can afford to learn what the real pace is instead of investing significant effort into estimating what it might be.

No Estimates Middle Ground

To summarize the whole post, I guess my message is rather straightforward. There’s value in different approaches to estimation, so instead of barking at one another we might as well learn how others approach this complex subject. For some reason it works pretty well for them. If we understand their context, even if ours is different, we might be able to adapt and adopt these methods to improve our own estimation process.

That’s why I think the discussion is valuable. However, in terms of learning and improving our estimation toolbox, the #NoEstimates notion doesn’t seem to be very helpful. I guess I’ll stay in the middle ground for the time being.

By the way, if we are able to improve our cooperation with clients on estimation, I couldn’t care less whether we call it no estimates or something different.

Thoughtful and honest post, Pawel – excellent read and I’m not surprised you’re confused/annoyed with the signal-to-noise ratio on Twitter about #NoEstimates. It’s more a function of Twitter being an exceptionally poor medium for building shared understanding of any problem among diverse participants, let alone one as wicked as alternatives to estimation in software projects.

I dislike estimating as much as you – and I’ve noticed that it’s not just the average software team doing a sub-$1M project that’s getting it wrong: it’s much larger teams with much larger budgets and significantly graver consequences.

I see the problem of software project estimation as an extension of seeing software development as analogous to manufacturing or construction, ie. two camps: “designers” who draft plans that are handed to “builders” to develop. Jack Reeves observed in 1992 that this is flawed, as machines are actually doing the construction, not humans – a position that links with Fred Brooks’s observation that in systems design projects, men and months aren’t interchangeable commodities.

This poses a significant problem for estimating software projects: We’re trying to speculate on how an entirely design-based activity among participants, using volatile raw materials that can and do change frequently while in play, will work out – at the time when we know the least about what we’re creating or even what would be really valuable to our customers.

We could guess/estimate and get it right, but this raises a more fundamental question: Is our purpose for estimating to land on a date and budget we speculated when we (and our customers) knew least about the system we were going to develop? What can we learn from that measure of success?

In talking with #NoEstimates practitioners, I’ve come to understand they favour working within hard constraints, eg. a fixed budget or delivery date that is derived from a real business need vs. one that is “arbitrarily” arrived at through speculation/forecasting. This gives their teams boundaries within which they can arrive at “lean” solutions that are deployed to production (not just staging) every day.

In this way, they’ve found that they’ve been able to engage with their customers on a more fundamental level, as the customers see their investment turned into working software every day instead of every other week. In turn, this fundamentally changes the relationship: it’s very high-trust.

Estimates, or rather the demand for them, are an indication that we might have bigger issues upstream. Everyone wants to know the future, or at least a probabilistic range of what their system will cost. However, as recent research has found[1], this can obscure the probability of being not just a bit wrong, but disastrously wrong.

This said, I’ve yet to encounter a //practitioner// of a #NoEstimates approach to software delivery who would advocate not giving estimates where a customer or manager demands it. If you’re in a situation where you can’t avoid it, you do it. No one wants to see you fired or lose work just to stand on some principle.

However, if you have the opportunity or latitude to try small experiments then you can learn a great deal about whether you even have the ability to work estimate-free in your domain. For example, you can learn a lot about increasing your predictability just by working to keep your user stories small and deployable in 1-2 days and by holding brief, daily retrospectives to inspect and adapt your own practices.

Certainly, no //practitioner// I have spoken with (and they all give their time quite freely to share their learning with others via Skype chats) has told me that they just up and dumped estimates one day. It was the result of very small, incremental experiments over a long period of time to understand how their teams worked and could be improved. They made some discoveries – your mileage will vary.

In a way, it’s kind of what the agile manifesto and many agile frameworks and processes and many learned folks wiser than I am have said for some time. ;-)

@Chris – Thanks for the thoughtful and elaborate comment. What I read between the lines is a suggestion that many of the disputants with the most extreme views are not practitioners. Is that right? I mean, it’s not that difficult to find people telling us that estimates are pure evil.

There is only one thing I don’t think I agree with. I don’t consider a request for an estimate an indicator of a bigger issue upstream. I agree that the better we understand the motivations of a person asking for estimates, the better we can address them, not necessarily by giving them an estimate.

However, one of the most frequent cases is simply someone who pays (or is going to pay) trying to figure out the rough cost of making their idea happen. I may be willing to pay x dollars for something but not twice as much. At the same time I don’t want to get just anything for x dollars – if my idea doesn’t fit the budget, I prefer not to start at all. I don’t see that as an issue.

Of course, there are other motivations too. Some of them may indeed signal an issue upstream. I’m just not sure it is as common as some people imply.

Correct: If you want to grok how to work without estimates, the best sources are the people who are doing it for themselves. Some have seized upon estimates in much the same way documentation was seized upon after the agile manifesto was released: It’s touched the same nerve with a lot of people questioning the value of an activity with low perceived value.

To pick up on my last point above, there’s nothing radical about how certain practitioners work without estimates – much of it is continuing the spirit of the four values and twelve principles of the manifesto (in particular, the first and third principles). In my “interview” with Neil Killick, when I asked how a team would start taking tentative steps toward working without estimates, he gave me the following advice:

“__Don’t simply stop estimating.__ Try and get better at creating simple, unambiguous slices of functionality. Measure your throughput. Compare story count data with your story point data. Discover for yourselves if a #NoEstimates approach is right for you and a good fit for your organisational culture.”
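A toy way to act on Neil’s “measure your throughput, compare story count data with your story point data” advice is to forecast the same backlog both ways and see whether the answers diverge. The iteration figures below are invented purely for illustration:

```python
# Invented per-iteration data: stories completed and points delivered.
iterations = [
    {"stories": 8, "points": 21},
    {"stories": 7, "points": 19},
    {"stories": 9, "points": 24},
    {"stories": 8, "points": 20},
    {"stories": 7, "points": 18},
]

avg_stories = sum(i["stories"] for i in iterations) / len(iterations)
avg_points = sum(i["points"] for i in iterations) / len(iterations)

# Forecast the same remaining backlog with both measures.
backlog_stories, backlog_points = 39, 102
by_count = backlog_stories / avg_stories
by_points = backlog_points / avg_points
print(f"By story count: {by_count:.1f} iterations")
print(f"By story points: {by_points:.1f} iterations")
```

When stories are sliced to a reasonably uniform size, the two forecasts tend to converge – which is exactly the discovery that lets a team consider dropping point estimates.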

Re: Estimates and Upstream Issues – Agree to disagree. The example you provide of a customer wanting to know if “x” can be done within “y” can be useful if the problem domain is understood well enough that you can make a reliable assessment, eg. “I have a budget of $10,000. I would like a SharePoint 2013 CMS with some custom lists, stylesheets and layouts and a starter taxonomy to organize my content.” We can safely rule this out as the licensing costs alone will wipe out most of the budget.

Where we get into trouble is when we’re asked to speculate about a solution within a complex domain (ie. all of custom software development) and our errors plus or minus could be significant.

The practitioners I’ve spoken with are incredibly risk-averse: They don’t like taking a flier on a big idea because there’s a very good chance it will go horribly sideways. Each has told me that working within a hard constraint is much better for them to drive value, eg. “We have a trade show in two months, we need to update our product with new features.” We don’t know what features will be in the final product, but we do know we’ve only got two months – let’s deliver value iteratively and see what will give the biggest bang for the dollar.

There’s a lot more to it than this, of course. Really it comes down to the advice Neil gives above – heck: You don’t even have to go all-in; you can get enormous benefits by just clarifying your user stories, and slicing them so they can be delivered to production within a day. That’s a laudable goal in and of itself – the rest, if you want it, will follow.

@Chris – I guess you don’t need to convince me. Small slices of scope, continuous delivery and measured throughput are our bread and butter (and something that frequently allows us not to estimate at all).

However, this happens once we already are up and running. There is some funding to explore what we can deliver and we collaborate with the client to find out how to maximize the value.

The problematic situation typically happens earlier – I know very few clients willing to say: “Let’s spend 10k and see what we can get for that.” It means that we don’t yet get to flow, continuous delivery of value and small slices, because there isn’t a go decision yet.

Personally, I’d advise clients to cut down their MVPs to a really minimal scope (and then some, as it still isn’t minimal), but that’s not the way everyone wants to attack their products, and I’m OK with that and don’t treat it as an issue.