September 10, 2019

DDSTOP The Saga Continues

There's been a rash of conjectures about all kinds of bad business, project, and software development (agile and traditional) management ideas of late. Time to update the Don't Do Stupid Things on Purpose (DDSTOP) post.

“There’s nothing more dangerous than an idea – or a person – that’s almost right.”

I worked a program, as VP of Program Management, that had a simple goal - remove all the nuclear waste from a weapons production plant and send it to secure storage, clean up all the ordinary hazards and send them to their assigned disposal sites, and uninstall all the infrastructure (heating, ventilating, communications, and security) - while 5,000 people were still working there and no one died along the way.

The program was called Cold and Dark, a description of the state of the 100s of buildings that had to be removed.

There was an Incident (a politically correct term for an avoidable accident) in which a fire started in an elevator shaft of a nuke building several stories below ground, when foam filler was used without reading the directions for its application. The result was a safety stand-down for everyone on the site (5,000 Steelworkers), including all of us office workers, to get the message about the health, safety, and safeguarding of the materials and processes on-site.

There was a banner campaign to get the message across. In our building a banner 15 feet high and probably 40 feet long hung in our lobby high-bay.

DON'T DO STUPID THINGS ON PURPOSE (DDSTOP)

I'm reminded of that when I hear suggested processes - in less threatening environments - like those listed here from latest to earliest, mostly from #NoEstimates advocates, but there are others.

This list is a cautionary tale, to remind everyone that where there is advice in the absence of principles, processes, and practices based on those principles - ask a simple question: does this advice have any evidence of being credible outside the personal anecdotes of the person providing it? No? Don't listen. Yes? Then ask for the evidence to substantiate the claim.

Extraordinary claims require extraordinary evidence - Carl Sagan

Let's start from the latest and work back to the earliest post that qualifies for Doing Stupid Things on Purpose.

59 - Estimation Proponents Have Not Been Able to Show

Estimation proponents have not been able to show (they have 50+ years of data) how estimation ensures on-time delivery. The Agile community did that in less than 20 years with #ContinuousDelivery and #NoEstimates. The game is up, your estimation emperor has no clothes!

This is one of those statements that has no basis in principle, let alone practice

Estimating - and the estimate the process produces - is the raw information needed to manage a project in the presence of uncertainty, to increase its probability of success. Neither the estimate nor the estimating process can assure anything. Only the actions of management and the team increase the probability of success. The quote's author - as he always seems to do - commits a category error: a semantic or ontological error in which things belonging to a particular category are presented as if they belong to a different category, or, alternatively, a property is ascribed to a thing that could not possibly have that property.

In this example, the claim is that the estimate and the estimating process can ensure on-time delivery.

The estimating process cannot ensure anything. In the presence of project uncertainty, there is NEVER assurance of on-time delivery. There is only a probability of on-time delivery.
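This distinction can be made concrete with a few lines of code. The sketch below treats each task's estimate as a triangular distribution and runs a Monte Carlo simulation; the task numbers and deadline are invented for illustration. What comes out is a probability of on-time delivery - never an assurance of it.

```python
import random

def prob_on_time(task_estimates, deadline_days, trials=10_000):
    """Monte Carlo probability of finishing by the deadline.

    task_estimates: (optimistic, most_likely, pessimistic) day counts per
    task, treated as triangular distributions - illustrative, not real data."""
    hits = 0
    for _ in range(trials):
        total = sum(random.triangular(low, high, mode)
                    for low, mode, high in task_estimates)
        if total <= deadline_days:
            hits += 1
    return hits / trials

# Hypothetical three-task project against a 30-day deadline
random.seed(42)
p = prob_on_time([(5, 8, 14), (6, 10, 18), (4, 6, 9)], 30)
print(f"Probability of on-time delivery: {p:.0%}")  # a probability, not a guarantee
```

Run it with a longer deadline and the probability approaches 1.0; it never becomes a certainty unless all uncertainty is removed from the inputs.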

The only way on-time delivery can be ensured is if there are:

No uncertainties - reducible or irreducible

No deadlines

No mandatory features or Capabilities

No constraints on resources or facilities

No mandatory quality for produced outcomes

In the absence of this naive world, there are always risks from uncertainties, constraints from scarce resources, changing requirements, uncertain funding, inconsistent processes, and all the other elements that lower the probability of project success.

The author of this quote is trying to convince us that decisions can be made in the presence of uncertainty without estimating. This not only violates the principles of probabilistic decision making but also the Microeconomics of Software Development and the Managerial Finance of spending other people's money.

It's a claim with no basis in fact, principle, or practice.

It also appears the author didn't read the Scrum Guide, or the Probability and Statistics book he had in high school.

58 - Pressure Your Teams to Get Stuff Out the Door and Expect Them to Give You Accurate Estimates

This is one of many examples of intentional bad management, used by #NoEstimates as the basis for Not Estimating.

When management utterly fails to know how to make credible estimates, the result is usually a disappointment. Conjecturing that Not Estimating fixes Bad Management is more than nonsense. It's willful ignorance of good management processes.

To meet deadlines, we need two critical success factors as a start:

What are the uncertainties in meeting that deadline?

What are the handling strategies for the risks to the deadline created by those uncertainties?

As you know if you've been reading this blog, uncertainty comes in two forms:

Epistemic uncertainty, which creates reducible risk that can be handled with risk buy-down activities

Aleatory uncertainty, which creates irreducible risk that can be handled with margin: schedule margin, cost margin, or technical margin.
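As a sketch of how margin handles irreducible risk: model a duration as a lognormal around the planned median (an assumed distribution, with illustrative parameters), and size the schedule margin as the gap between a chosen confidence percentile and the median plan.

```python
import math
import random

def schedule_margin(median_days, sigma, confidence=0.80, trials=20_000):
    """Size the schedule margin that protects against aleatory (irreducible)
    duration variance. Durations are modeled as lognormal around the planned
    median - the distribution and parameters are illustrative assumptions."""
    samples = sorted(
        random.lognormvariate(math.log(median_days), sigma) for _ in range(trials)
    )
    at_confidence = samples[int(confidence * trials) - 1]
    return at_confidence - median_days  # days of margin to carry in the plan

random.seed(7)
margin = schedule_margin(median_days=20, sigma=0.3)
print(f"Plan 20 days, carry ~{margin:.1f} days of margin for 80% confidence")
```

The margin doesn't reduce the variance - nothing can, it's irreducible - it simply protects the deliverable date against it.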

It seems that those making claims about estimating - or Not Estimating - don't know anything about estimating.

If you want to learn about estimating, start with the Estimating Compendium at the top of this page.

57 - Stupid #NoEstimates Concepts

1. We estimate a task with a specific implementation in mind. Then we decide to do it differently after starting the work. Upon hitting the deadline we retrospectively assign that to the estimate, even if we did something completely different.

2. We estimate a project. Then proceed to work overtime, weekends and "forget" to include those in the "actuals". Later we say "we were only 20% late" and believe it!

3. Fixed scope is an illusion. Even the interpretation of the written text evolves over time (assuming you try not to change the document). Focus your project on outcomes, not scope. On-time / on scope projects can be failures too!

Here's a set of statements from a No Estimates advocate. Let's look at each one.

If we estimate a specific implementation and that changes, we need a new estimate. Why are we waiting until the deadline arrives before updating the estimate? The notion of Estimate to Complete and Estimate at Completion is at the core of any closed-loop control system. No credible management system that spends other people's money in the presence of uncertainty operates in the manner described.

If your estimates show some credible level of confidence - say 80% - for the labor hours, why are you working overtime? The only answer is your estimate was wrong, you're poor at managing the work, you're not as efficient as assumed in your estimate, or several other reducible and irreducible reasons. If the efficacy of the team is too low, then a NEW Estimate to Complete is needed. This is again the fallacy of #NoEstimates, where they ignore the core principles of a closed-loop control system.
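The closed-loop arithmetic here is standard earned-value management. A minimal sketch, with hypothetical status numbers:

```python
def estimate_at_completion(budget_at_completion, earned_value, actual_cost):
    """Standard earned-value feedback loop: compare plan to actuals,
    then re-estimate the remaining work from observed performance."""
    cpi = earned_value / actual_cost                    # cost performance index
    etc = (budget_at_completion - earned_value) / cpi   # estimate to complete
    eac = actual_cost + etc                             # estimate at completion
    return cpi, etc, eac

# Hypothetical status: $100k budget, $40k of value earned for $50k spent
cpi, etc, eac = estimate_at_completion(100_000, 40_000, 50_000)
print(f"CPI={cpi:.2f}, ETC=${etc:,.0f}, EAC=${eac:,.0f}")
# CPI = 0.80, so the remaining work is forecast to cost $75,000 and the
# project now forecasts $125,000 at completion - a NEW estimate, from actuals.
```

The whole point of the loop is that the forecast changes as performance data arrives - the opposite of waiting for the deadline to find out.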

Where's the feedback loop?

Where are the corrective or preventive actions?

How long are you willing to be late before you find out you're late? The No Estimates advocate behind these quotes claims you should produce working software every single day (and it could be).

So why isn't a new assessment of the Estimate to Complete done on a similar cycle?

Why wait to find out you're late (or over budget) until it's too late?

Fixed scope is an illusion, but fixed Capabilities are common. They're usually defined in the contract between those paying and those providing. This is called Capabilities Based Planning. In the CBP paradigm, the scope can be - and many times has to be - flexible, since any non-trivial project is usually developing products or services that are not in place now, and new capabilities are needed.

But someone has to document that change and authorize that change; otherwise, when the money and time run out, those paying will be surprised they didn't get what they thought they were paying for.

Outcomes ARE the scope - otherwise, WHY are you producing these outcomes if no one wants them?

Yes, on-time, on-budget projects can be failures. This is more common than not. But the root cause of that does NOT come from estimating by definition. Another example of the utter failure to understand the principles of Root Cause Analysis.

Risk Management is How Adults Manage Projects - Tim Lister

Managing in the presence of risk - created by reducible and irreducible uncertainty - requires making estimates, since we're operating in the presence of uncertainty.

For each risk, we need to find the Root Cause - the Condition and/or the Action that creates the risk. Then we can determine how to handle that risk: by buying it down with testing, experiments, redundancy, or other direct actions to reduce the Epistemic uncertainties that create the risk, or by providing margin for the Aleatory uncertainties that create risk. Both of these activities start with making estimates of the attributes of the uncertainty - probabilistic and statistical.

Now, is this generally applicable to all estimating conditions? NO. Start with the Value at Risk. Got a de minimis condition? Estimates provide little value. Got a risk to the success of the firm? Better have a robust risk management system, with its estimating, corrective, and preventive action processes, if you're going to survive to live another day.

56 - Past is the Basis of the Future

You never use actuals to replan. Plans are about the future, not the past. There are no actuals for the future. The estimation problem is simply reset when new actuals come in, leading many to "re-estimation." Another can of worms.

This is one of those what in Gawd's Green Earth is the Original Poster thinking about? moments. The answer is he's not. Reference Class Forecasting is the core basis of all estimating processes, from construction to Agile Software Development. This is one of those statements that can be debunked in under 5 minutes by a middle schooler.

And there are 14,300 more papers, pages, and books on the topic of Reference Class Forecasting AND Agile Software Development that can be found by a middle schooler on Google in under 3 minutes.

The OP is clearly clueless about the simple and fundamental principles and processes of estimating in the presence of uncertainty.

Go find past projects and see if they are like your project. If so, make adjustments to better match your project. Go look in the NESMA, COSMIC, and IFPUG databases, just as all professionals managing other people's money do. Take those adjustments into consideration and behave appropriately. This is not that hard. This is well known in the domain of Enterprise IT and Software Intensive System of Systems. Perhaps those working on de minimis projects - where there is no deadline, no Not-to-Exceed budget, no mandatory capabilities for that time and budget - can skip this. The definition of that work is de minimis.
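A minimal sketch of Reference Class Forecasting: take the actuals of similar past projects (the durations below are invented - real reference classes come from the function point databases named above), adjust for the new project's relative size, and read off both a likely and a conservative forecast.

```python
import statistics

def reference_class_forecast(reference_durations, size_ratio):
    """Forecast from a reference class of similar past projects.

    reference_durations: actual durations (days) of comparable past projects;
    size_ratio: adjustment for the new project being bigger or smaller.
    All numbers here are invented for illustration."""
    median = statistics.median(reference_durations)
    p80 = statistics.quantiles(reference_durations, n=10)[7]  # ~80th percentile
    return median * size_ratio, p80 * size_ratio

past = [120, 95, 140, 110, 160, 130, 105, 150, 125, 115]  # hypothetical actuals
likely, conservative = reference_class_forecast(past, size_ratio=1.2)
print(f"Likely: {likely:.0f} days, 80% confidence: {conservative:.0f} days")
```

The past is the basis of the future: the distribution of prior outcomes, adjusted, is the forecast - exactly what the OP claims can't be done.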

This is one of those blatantly obvious fallacies that can be debunked with ease.

This is one of those fully buzzword-compliant statements by a self-proclaimed agile expert who claims decisions can be made in the presence of uncertainty without estimating the consequences of those decisions on the cost, schedule, and technical performance of the project.

Let's deconstruct the nonsense here one phrase at a time and show how each is a fallacy of estimating, likely based on a lack of training or experience in managing business processes while spending other people's money.

Estimation is not an activity - Yes, it is. There's a difference between estimate as a noun and estimating as a verb.

Noun - an approximate judgment or calculation, as of the value, amount, time, size, or weight of something. ... a statement of the approximate charge for work to be done, submitted by a person or business firm ready to undertake the work.

Verb - to say what you think an amount or value will be, either by guessing or by using available information to calculate it.

Estimating the cost, duration, or technical performance of a piece of software is a verb - it's an activity performed by people. The estimate, as a noun, is the result of the activity of estimating.

(Estimating) It's a management paradigm - Yes, it is, but it is many other things as well.

It's a management process. If the OP wants to call that a paradigm, I guess that's OK.

But estimates, and the production of those numbers through the estimating process, are part of every credible decision-making process in business and engineering in the presence of uncertainty.

If there is no uncertainty, estimates are not needed

#NoEstimates then means #NoUncertainty

But since there is uncertainty on all project work, estimates are needed to make credible decisions when spending other people's money

Near perfect predictability

This is a perfect example of willfully ignoring what an estimate is

There is no such thing as perfect predictability in the real world when there are uncertainties

All estimates have precision and accuracy. Accuracy and precision are alike only in that they both refer to the quality of a measurement, but they are very different indicators of that measurement.

Accuracy is the degree of closeness to true value.

Precision is the degree to which an instrument or process will repeat the same value. In other words, accuracy is the degree of veracity while precision is the degree of reproducibility.
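The difference is easy to show numerically. This sketch scores two hypothetical sets of estimates against a known actual - bias measures accuracy, spread measures precision:

```python
import statistics

def accuracy_and_precision(estimates, true_value):
    """Accuracy = closeness to the true value (bias of the mean);
    precision = how tightly repeated estimates agree (spread)."""
    bias = statistics.mean(estimates) - true_value  # accuracy: small |bias| is good
    spread = statistics.stdev(estimates)            # precision: small spread is good
    return bias, spread

# Hypothetical: five estimates of a task that actually took 10 days
precise_but_inaccurate = [14.0, 14.1, 13.9, 14.0, 14.2]  # tight cluster, biased high
accurate_but_imprecise = [6.0, 14.0, 9.0, 12.0, 10.0]    # centered, but scattered

b1, s1 = accuracy_and_precision(precise_but_inaccurate, 10)
b2, s2 = accuracy_and_precision(accurate_but_imprecise, 10)
# First set: large bias (~+4 days) but tiny spread - precise, not accurate.
# Second set: small bias (~+0.2 days) but large spread - accurate, not precise.
```

An estimate can be reproducibly wrong or loosely right - a credible estimating process has to manage both dimensions.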

Top-down

Estimates can be top-down, bottom-up, and both

Top-Down estimates are used to estimate the total cost of a project by using information from a previous, similar project. This is also called Reference Class Forecasting or Comparison Class Forecasting and is a method of predicting the future by looking at similar past situations and their outcomes. The theory behind reference class forecasting was developed by Daniel Kahneman and Amos Tversky and helped Kahneman in his Nobel Prize-winning work in economics [1].

Bottom-up estimates are typically made by the people doing the work, who take part in the estimating process. This is a way to approximate an overall value by approximating values for smaller components and using the sum of those values as the overall value. One disadvantage of bottom-up estimating is the time it takes to complete. While other forms of estimating can use the high-level requirements that start the project as a basis, bottom-up estimating requires low-level components. To take each component of the project work into consideration, those components must first be identified through decomposition.

Both Top-Down and Bottom-Up estimates can be combined to meet the needs of those paying for the work.
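A minimal sketch of the bottom-up roll-up, assuming independent components with illustrative (mean, sigma) values: means add, and sigmas combine as a root-sum-square, giving a total that can then be cross-checked against a top-down reference-class number.

```python
import math

def bottom_up_rollup(components):
    """Roll up decomposed (mean, sigma) component estimates.

    Means add; for independent components, variances add, so the total
    sigma is the root-sum-square of component sigmas. Numbers are
    illustrative, not from a real decomposition."""
    total_mean = sum(mean for mean, sigma in components)
    total_sigma = math.sqrt(sum(sigma ** 2 for mean, sigma in components))
    return total_mean, total_sigma

parts = [(10, 2), (20, 4), (15, 3)]  # (mean days, sigma) per component
mean, sigma = bottom_up_rollup(parts)
print(f"Bottom-up total: {mean} days, sigma {sigma:.1f} days")
# Cross-check this 45-day roll-up against a top-down reference-class number.
```

When the two approaches disagree badly, that disagreement is itself information - it usually means the decomposition missed scope or the reference class isn't comparable.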

Command and Control

C&C is bad management

No credible business applies command and control to developing products

C&C works well in the management of nuclear weapons - a domain I work in

The OP uses C&C as a red herring to rally the troops who have experienced Dilbert's Pointy-Haired Boss, but it has no business being applied in any credible development organization

And credible estimating processes do not use C&C

Decisions without customer feedback

This is literally doing stupid things on purpose

The OP loves to make these kinds of statements with no consideration of the nonsense he's spewing

Planning without doers

Why would you do this?

Another example of DSTOP

I'll be very crass here: when you hear nonsense like this from a self-proclaimed expert - that you can show up on time, on budget, with the needed capabilities, without estimating anything - please know he doesn't know WTF he's talking about.

While this notion might be applicable in social or political domains, when you're spending other people's money, that spending needs to be done inside the governance process. The notion that the coders can say how the business's money gets spent is the basis of the No Estimates advocacy. This might be the case when there is nothing at risk - a de minimis project.

But when the Value at Risk is above the de minimis level, the cost to produce value at the needed time for the needed cost is the basis of Managerial Finance of any credible firm that intends to stay in business.

Those No Estimates advocates appear to willfully ignore an immutable principle

There is NO principle of Managerial Finance, Probabilistic Decision-Making, or Microeconomics of Software Development where, in the presence of uncertainty - both reducible and irreducible, which create risk - a credible decision can be made, using scarce resources, while spending other people's money, without Estimating the outcome of that decision and its impact on the probability of success of the project.

The ignorance of this principle is the core fallacy of #NoEstimates.

53 - Book Review - No Estimates: How to Measure Project Progress without Estimating

There was a recent review of the No Estimates book. Ignoring for the moment the utter fallacies in the notion of making decisions in the presence of uncertainty without estimating, here are my responses to the questions posed by the reviewer of the book.

How about some clarification of the concepts you mention?

Why are estimates waste? To whom? To those spending - maybe they are a waste. To those paying? You may want to go ask the CFO or the customer if they have a fiduciary need to know how much it will cost them to receive the value they are paying you for. This is called the "Microeconomics" of software development.

They don't add business value. To whom? If you show up late and over budget, the "value" of the work you're producing is reduced. Again, a core principle of Microeconomics. Would you buy a product or service without having some knowledge of the cost of that product or service?

A black swan could potentially wipe you out. This is a common Vasco claim. He read a Macroeconomics book by Taleb. SW development is NOT Macro, it's Micro. I sense Vasco doesn't know the difference between the two. In Macro, the money is not the same "color" as in Micro - in SW development you can "stop" spending. As well, the notion of a Black Swan is an Ontological uncertainty - an "unknowable" uncertainty. That is extremely rare in the software development business.

Time after time estimates have been proven wrong.

Go find the root cause of that and take corrective or preventive actions to stop doing stupid things on purpose. Here are some resources you can start with: https://herdingcats.typepad.com/my_weblog/2019/05/software-estimating-resoruces-1.html

Then go hire someone who knows what they're doing and has done this before. Unless you're inventing new physics, someone somewhere has knowledge of how to estimate the work. Having worked myself on "inventing new physics" in graduate school on a particle accelerator, that's what we had post-docs and principal investigators for, to help us along.

What questions do estimates try to answer?

When will we be done?

What will it cost?

Will we make any money on this spend?

What's our break-even date?

Will our investors ever get their money back?

What should the "price" for this product be, so we can recover our cost at the time we need to recover it, so the board of directors wouldn't ask "do you guys know what the hell you're doing?"

This is basic Business Management 101, another topic Vasco willfully ignores

Can you predict when a particular project will end?

Yes, there are many ways to do that. Start by reading these resources: https://herdingcats.typepad.com/my_weblog/2019/05/software-estimating-resoruces-1.html

Then learn about Agile Function Point Counting and the 6 to 8 tools available for estimating agile projects that you can buy, install, and put to work. Some are free, some are expensive. NO credible, non-trivial software development org is without an estimating tool. For agile, start with Troy's book and free tools: https://www.amazon.com/gp/product/1466454830/ref=dbs_a_def_rwt_bibl_vppi_i1

Given the rate of progress so far, and the amount of work still left, when will the project end?

This is called the Estimate to Complete and is easily determined with tools built into Jira, Rally, TFS, and VersionOne, and with Troy's free Excel spreadsheet.
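The underlying arithmetic those tools perform is simple enough to sketch - remaining scope divided by observed velocity, with hypothetical story-point numbers:

```python
def forecast_completion(total_points, done_points, sprints_elapsed):
    """Estimate to Complete from progress so far: remaining scope divided
    by observed velocity. The same arithmetic the agile tools perform;
    the story-point numbers are hypothetical."""
    velocity = done_points / sprints_elapsed
    remaining = total_points - done_points
    return velocity, remaining / velocity

# 200 points of scope, 80 points done after 4 sprints
velocity, sprints_to_go = forecast_completion(200, 80, 4)
print(f"Velocity {velocity:.0f} pts/sprint, ~{sprints_to_go:.0f} sprints to go")
```

Note that this is itself an estimate, built from past data - a forecast based on actuals, which is exactly what the No Estimates position claims isn't needed.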

Given the rate of progress, how much of the work can be finalized by date X?

Again, the Estimate to Complete and Estimate at Completion - both standard "engineering" processes that Vasco willfully ignores.

The answer to this question is important because of: plans, comfort, uncertainty reduction, financial projections, sales proposals.

What to use instead for story estimation?

Function Points. References to those are in the link on estimates

Forecasts based on past data

This is reference class forecasting. Google "reference+class+forecasting+for+agile+development" to find resources

Prioritization based only on the value

Value cannot be determined without knowing the cost to produce that value. This is high school Economics - again, a class Vasco appears not to have taken.

52 - Requirements, as traditionally used, are a separation tool. To separate the "thinkers" from the "doers". In software this is a HUGE mistake!

A system is an integrated combination of any or all of the hardware, software, facilities, personnel, data, and services necessary to perform a designated function with specified results.

Systems engineering is "the recognition and application of scientific, management, engineering, and technical skill used in the performance of system planning, research, and development with an emphasis on the technical management process. This includes the application of the requirements development process, decision analysis methods, technical assessment, configuration management, and interface management."

Systems Engineering

Is a discipline that concentrates on the design and application of the whole system as distinct from its parts. It involves looking at a problem in its entirety, taking into account all the facets and variables, and relating the social to the technical aspect.

Works as an iterative process of top-down synthesis, development, and operation

Is an interdisciplinary approach and means to enable the realization of successful systems

Focuses on defining customer needs and required functionality early in the development cycle

Considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user needs.

Systems Engineering Tasks include:

Stating the problem.

Investigating alternatives.

Modeling the system.

Integrating the system elements.

Launching the system.

Assessing performance.

Reevaluating the results.

The Benefits of the Systems Engineering approach:

Ensuring the effective development and delivery of the capability through the implementation of a balanced approach with respect to cost, schedule, performance, and risk by using integrated, disciplined, and consistent activities and processes regardless of the acquisition life cycle

Enabling the development of engineered resilient systems that are trusted, ensured, and easily modified (Agile)

Identifying the most effective and efficient path to deliver a capability, from identifying user needs and concepts through delivery and sustainment

Using event-driven technical reviews and audits to assess program maturity and determine the status of the technical risks associated with cost, schedule, and performance goals

Nowhere in the Systems Engineering paradigm, starting with requirements elicitation, are requirements used to separate the thinkers from the doers. Like the #NoEstimates fallacy from the same poster, this too is a fallacy based on uninformed and unsubstantiated opinion.

51 - Cone of Uncertainty

The misunderstanding of the Cone of Uncertainty has come back, with the claim by a #NoEstimates advocate that the Cone of Uncertainty is Fake News and there is published data discrediting the myth.

First, the Cone of Uncertainty is a Principle used to define the needed reduction in the variances of estimates on programs. It is NOT a post-hoc assessment of project performance; rather, it is a guide stating what level of confidence will be needed at what point in the program to increase the Probability of Program Success. The Cone does NOT need data to validate the principle of reducing uncertainty as the program progresses. That is an Immutable Principle of good project management. If your uncertainty is not reducing at some planned rate, you're not managing the project for success, and you're going to be late, over budget, with products not likely to work - or some combination of those. Asking for data to show the CoU is valid is a fallacy, and shows a lack of understanding of the principle.

When you hear I have data that shows the cone doesn't reduce, first ask: what's the root cause of the estimating accuracy on your projects NOT improving as the project progresses? The person first making that claim actually had data, which he then proceeded to ignore, and went on to claim the cone doesn't reduce. If he had said the cone doesn't reduce for ME, that would be fine. But #NoEstimates advocates picked up this canard and ran with it, claiming estimates can't be made because the uncertainty around those estimates can't be reduced - NEVER once asking why the uncertainties don't reduce, or reading the IEEE article to see there were causes for why the cone didn't reduce.

Although I don’t have definitive evidence to explain the variation in estimation accuracy I observed, I’ve identified what I believe are the primary causes:

optimistic assumptions about resource availability,

unanticipated requirements changes brought on by new market information,

These are some of the suggested causes for the original claim that the cone doesn't reduce, from a letter to the IEEE Computer article.

These and other root causes of NOT following the reducing Cone of Uncertainty are common on many projects, from software to construction. But these possible root causes do not remove the Principle that you should manage the project and its estimating processes TO reduce the Cone of Uncertainty, so the probability of success of the project can be increased.

With the need to reduce the Cone of Uncertainty to seek an increase in the probability of project success, here are the resources you can read - which, it appears, those conjecturing that the Cone of Uncertainty can't be reduced failed to read. Then you can see where they went wrong and put the Cone of Uncertainty to work for you in making informed decisions in the presence of uncertainty. These are papers and electronic books from my library that we use on all our software intensive system of systems projects, and they speak to the principles and practices of the Cone of Uncertainty. Anyone claiming the cone doesn't reduce needs to be asked: did you read all of these and understand what the Cone of Uncertainty is about - or are you just making this up?
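As a sketch of the Cone as a management target rather than a measurement: set a planned variance band that narrows to 1.0x at completion, then compare your actual estimate variance against it. The classic 0.25x-4x starting band and the linear narrowing rate are illustrative assumptions, not a standard.

```python
def cone_band(initial_low, initial_high, fraction_complete):
    """Planned estimate-variance band that narrows as the program progresses.

    Returns (low, high) multipliers on the final actual, converging to 1.0x
    at completion. The 0.25x-4x starting band and linear narrowing are
    illustrative assumptions - real programs set planned accuracy targets
    at each milestone."""
    low = initial_low + (1.0 - initial_low) * fraction_complete
    high = initial_high - (initial_high - 1.0) * fraction_complete
    return low, high

for done in (0.0, 0.25, 0.5, 0.75, 1.0):
    lo, hi = cone_band(0.25, 4.0, done)
    print(f"{done:>4.0%} complete: estimates expected within [{lo:.2f}x, {hi:.2f}x]")
```

If your observed variance sits outside the planned band as the project progresses, that's the signal to take corrective action - not evidence that the Cone is a myth.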

Time to put an end to the nonsense of No Estimates and the Cone of Uncertainty claims, and the continued willful ignorance of the principles, practices, and processes of making decisions in the presence of uncertainty by estimating the impacts of those decisions.

50 - Buffers in SW projects are "not" the same as insurance. Insurance protects you against catastrophic loss with a limited investment (premium). Buffers protect neither buyer nor vendor against catastrophic loss (see the Berlin airport among others)

This is one of those claims fitting Wolfgang Pauli's criteria

This isn't right, this isn't even wrong

First, buffers are the margin that protects the project from aleatory uncertainties - the naturally occurring variances in time, cost, and technical performance. This type of uncertainty is the dominant variance on all projects. This uncertainty creates a risk that is Irreducible, hence the use of a buffer or margin to protect the deliverable from being late, over budget, or not meeting the technical requirements.

So, in fact, it is insurance in the manner in which it provides protection. But because this protection is in the form of margin, it does NOT remove, reduce, or correct the source of the risk. That source - the aleatory uncertainty - is Irreducible.

But insurance does NOT protect you from Catastrophe either. When the catastrophe occurs what insurance does is compensate you for the loss. It compensates you for the loss of your car when it is totaled, by sending you a check to buy another one. It compensates you with a check to replace your household goods when they are stolen. It compensates you with a check to replace your house when it washes away during a flood.

The second issue is that this No Estimates advocate never seems to understand that until you know the conditions and actions that create the undesirable outcome - the Berlin Airport being late - NO suggested fix will be effective. This is a fundamental, immutable principle of Root Cause Analysis. As well, any middle schooler with Google can find the root causes of the Berlin Airport delays in 10 minutes. And yes, there were bad - very bad - estimates made of the work efforts. But to suggest that NOT estimating would have brought the airport in on time is utter nonsense.

Here's a summary of the root causes for Berlin Brandenburg being late and over budget

Too many stakeholders with different interests

Failure to communicate real project status

Major changes without replanning the project

Insufficient quality control of the work

Too many fundamentals not in place - too many critical areas not properly managed

The airport was doomed to fail on day one.

Our own Denver International Airport is suffering from the same root causes. What was planned as a security update scheduled for 2021 is now 2023.

So we're back to the same thing: Doing Stupid Things on Purpose.

49 - It is ridiculous to say "estimates (x) are ok because it is people who misuse them f(x)." Wrong! In the "Real World" how things get used & their impact (f(x)) is much more important than the things themselves (x). Things that are often misused should be dropped" #NoEstimates

Yet another example of failing to understand the role of Root Cause Analysis in improving the probability of project success. The notion that when estimates are misused, the corrective action is to NOT estimate willfully ignores the principles of root cause analysis and of decision making in the presence of uncertainty, in exchange for spending your customer's money with no clue of how much will be needed, when you'll be done, or the probability that when you do run out of time and money, there will be anything of value to deliver.

Things that are often misused need to have their use corrected. This is like saying: I misuse my lawnmower and cut the grass too low, which burns the roots, so to correct that undesirable outcome, what I'll do is stop mowing my lawn.

How about the person making this claim about dropping processes he doesn't know how to use take a look at how to estimate in the presence of uncertainty and how to conduct a Root Cause Analysis, and then read the resource materials here that will guide him in finding the conditions and actions of the problem he's having with estimating.

So remember this

When you see or experience dysfunction and you DON'T find the Root Cause of that dysfunction - to determine the conditions and/or activities that create it - and instead conjecture some action you believe will fix the dysfunction, you are willfully ignoring the cause of technical and business dysfunction. This is the basis of #NoEstimates advocacy.

To prevent or correct this problem

When mistakes occur, blame your process, not people. Apply Root Cause Analysis to find what allowed the dysfunction to occur and what will prevent it in the future. Assume people will continue to make mistakes and build fault tolerance into your improvements.

Research shows there are five core reasons people ignore Root Cause Analysis

Lack of Time - "we just don't have time for root cause analysis" is a common excuse where there is a firefighting culture.

It's a Blame Culture - Root cause analysis will inevitably uncover problems in your infrastructure that are the direct result of something incorrectly done—or not done—in the first instance. When there is a culture of finger-pointing and blaming others, people may be reluctant to be involved in root cause analysis efforts for fear of being blamed for creating an error.

Lack of Organizational Will - when complex problems are difficult to resolve, management commitment and support are required.

Lack of Skills, Knowledge, and Experience in Root Cause Analysis - performing RCA requires skills and experience, both of which can be acquired with ease.

Lack of Detail and Missing Data - this can be fixed by applying the Apollo Method in the Root Cause Analysis link above to find the conditions and activities that create the dysfunction.

One last thought can be found in the blog post I Smell Dysfunction, motivated by the original post from #NoEstimates advocates and countered by the advice of the Apollo Method author.

48 - In the realm of projects, heuristics are good enough. We don't need more detailed info. We need concrete, actionable info to feed the feedback loop if decision making in e.g. Scope Management. It's people's decisions that lead to on-time delivery, not models

Abstract models reduce the complexity of the real world to digestible chunks that are simpler to understand. This claim fails to understand the role of the model.

While abstract models are just representations, omitting some aspects of the real-world system, they do so temporarily. But they map what we hope to understand into a form that we can understand. Different types of models answer different types of questions about the system they represent, but even if we build many models, they can never answer every possible question about the system. That can only be done by the final system itself.

Both people AND models are needed for success

47 - If you fund projects that are fully specified months ahead (if not years) of their starting date, then you cannot be Agile (no adaptability)

First, funding is not budget. Budget is what you've been authorized to spend. Funding is the authorization to spend that budget. The author of this statement appears not to understand the difference. This is a core Managerial Finance principle.

Next is the fallacious notion that funding somehow locks in the needed Capabilities and prevents agility. This is not true. Agility in delivering the needed Capabilities at the needed time for the needed BUDGET is the core basis of Agile. This is one of those Deepity statements that sound important, but when examined in the light of Managerial Finance, the Microeconomics of SW development, and Probabilistic decision making, it is a fallacy.

46 - If you measure projects on how well they fake progress (milestone review via powerpoint), spend money and make time passed look like "actuals" (remember the fake milestones), then you get projects that are perfect at spending money and faking progress. Surprised?

Another good example of DDSTOP. Why would you fake progress (I know it happens) and call yourself an honest person? The author then makes the unsubstantiated claim that NOT estimating will fix this problem. Estimating or not estimating has nothing to do with people behaving badly - or in our domain, people behaving illegally, outside the business governance process.

45 - Options are the key to agility: more options, more ability to adapt (exercise another option.) Plan-Drive approaches are designed to remove options, therefore control

This is one of those claims that is half right and half wrong, making the claim fully wrong.

Options are needed when managing in the presence of uncertainty. Without options, the risks created by uncertainty - reducible risk and irreducible risk - are going to negatively impact the project.

The notion that plan-driven approaches are designed to remove options is simply uninformed ignorance of how planning works.

The very basis of planning - good planning - is to identify options for when the uncertainties that create risk come true. The phrase What's Plan B? comes from this principle.

Managing in the presence of uncertainty mandates we have alternatives to the plan, since those uncertainties have specific probabilities of coming true and disrupting the current Plan.

The author of this quote is anti-estimates, anti-plans, anti-management, and appears to be anti-doing his homework on how to manage in the presence of uncertainty.

Let's start with some resources for how to manage in the presence of uncertainty, when those paying us need us to produce needed capabilities, at a needed time, and at a needed cost. Here's a quick overview of how to manage in the presence of uncertainty.

So let's address the issue in the quote that Plans remove Options. First, a correction to the quote.

The freedom to choose is the underlying principle of agile development. The term Option in the software development world is Real Option. Real Options is about deferring decisions to the last responsible moment, which is an explicit principle of agile development. By avoiding early commitments, flexibility is gained in the choices to be made later.

Real Options is an approach that allows people to make optimal decisions within their current context. This may sound difficult, but in essence, it is a different view on how we deal with making decisions. There are two aspects to Real Options, mathematics, and psychology. The mathematics of Real Options, which is based on Financial Option Theory, provides us an optimal decision process. The psychology of uncertainty and decision making tells us that we don't always follow the optimal processes and make irrational decisions at times.
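The expected-value core of that decision process can be shown with a toy deferral calculation. Everything here - the two feature choices, the payoffs, the probability, and the cost of waiting - is an illustrative assumption, a minimal sketch of why flexibility has value:

```python
# Toy Real Options sketch: the value of deferring a commitment until
# an uncertainty resolves. All numbers are illustrative assumptions.
payoffs = {  # payoff in $K of each design choice, by market outcome
    "build_feature_A": {"market_up": 300, "market_down": -100},
    "build_feature_B": {"market_up": 100, "market_down": 50},
}
p_up = 0.5        # assumed probability the market moves up
defer_cost = 20   # assumed cost ($K) of waiting one period

def expected(choice):
    # Expected payoff if we commit to this choice before knowing the outcome
    return (p_up * payoffs[choice]["market_up"]
            + (1 - p_up) * payoffs[choice]["market_down"])

# Commit now: pick the best choice on expected value alone.
commit_now = max(expected(c) for c in payoffs)

# Defer: wait, see the outcome, then pick the best choice in each state.
defer = (p_up * max(p["market_up"] for p in payoffs.values())
         + (1 - p_up) * max(p["market_down"] for p in payoffs.values())
         - defer_cost)

option_value = defer - commit_now
print(f"commit now: {commit_now}, defer: {defer}, option value: {option_value}")
```

Even with a cost to waiting, the deferred decision is worth more here because flexibility avoids the downside state - the essence of deferring decisions to the last responsible moment.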

So how does planning interact with Real Options? And is the claim that Plan-Driven approaches are designed to remove options a fallacy, or worse, simply bogus?

Some choices fall under the title real options:

Depending on the real option value (ROV) - that is, the value of the option - there are choices to expand the project in some new direction, contract the project from one direction, or both expand and contract. This is the basis of flexibility in the execution of the project. The very heart of Agile is the ability to make changes in direction in the presence of emerging conditions, that is:

Initiate a change

Delay a change

Abandon a direction

Plan a new direction

This notion of planning a new direction assumes we have a plan now - a plan that will be changed as the result of new information that allows us to exercise our real option.

Planning is about exercising options

Software development is an investment activity. Managing this investment activity in the presence of uncertainty means making trade-offs for the options that appear from uncertainty. Uncertainty puts a premium on flexibility to change products and plans, but flexibility also incurs costs. Developers can change products and plans as new information comes to light [1]. This is the basis of agile.

The result of applying Real Options to software development is simple

We don't have A Plan, we have Plans (plural) for getting to where we need to go.

The original poster has failed (yet again) to read the literature and understand that the words he's using are incorrect. Google will find everything you need with the simple phrase "real options" and "planning" and "software development." Don't listen to anyone who hasn't done his homework on a subject. Those in the #NoEstimates community are notorious for not doing their homework, and this is a prime example.

Finally ...

Real Options is about using estimates to make decisions in the presence of uncertainty by assessing the options and the impact of the possible choices to deliver the best value for the investment.

So the OP's objection to estimating is also a fallacy when faced with the uncertainties of making decisions based on options.

"Real Options 'in' Projects and Systems Design - Identification of Options and Solution for Path Dependency," Tao Wang, doctoral thesis submitted to the Engineering Systems Division, May 17, 2005.

44 - If #Agile has taught us something it is that uncertainty can never be reduced "intellectually" (e.g. Through estimation BDUF or Plan-Driven approaches), only through "ACTION" and empirical observation

This is one of the Deepity statements where the words sound important but are nonsense.

First, what does it mean to intellectually reduce uncertainty?

Uncertainty is a tangible measure of the probability that what you expect isn't going to happen

There is no intellectual aspect to uncertainty

There IS a mathematical aspect

Uncertainty comes in two forms

Epistemic - is uncertainty that comes from the lack of knowledge. This lack of knowledge comes from many sources. Inadequate understanding of the underlying processes, incomplete knowledge of the phenomena, or imprecise evaluation of the related characteristics are common sources of epistemic uncertainty. In other words, we don't know how this thing works so there is uncertainty about its operation.

Aleatory - is uncertainty that comes from a random process. Flipping a coin and predicting either HEADS or TAILS is aleatory uncertainty. In other words, the uncertainty we are observing is random, it is part of the natural processes of what we are observing.

All Observations are Empirical

Empirical evidence is information acquired by observation or experimentation. The process is a central part of the scientific method.

Release Plans and Product Roadmaps in Agile are Plans

Planning is part of all agile processes

The Backlog - Feature Backlogs or Story Backlogs are plans for what is going to be done in the next Sprint or Release

BDUF is a strawman for Bad Management

The author of the OP and other #NoEstimates like to use Big Design Up Front as the stalking horse for agile.

This process is forbidden in our complex software-intensive system of systems domain.

If he sees BDUF, those developers are Doing Stupid Things on Purpose

Reference Posts on the topic of Uncertainty, Risk, and Estimating in the Presence of Uncertainty

The original poster appears not to have done his homework in his High School probability and statistics class and doesn't understand the difference between Aleatory and Epistemic. Don't do the same - learn about estimating and the resulting risks in the following resources.

43 - Understand one thing before jumping into #NoEstimates debate: product and software development for *real* use is a DISCOVERY problem, not a delivery problem. Estimation focus on delivery, not DISCOVERY

All product and software development operates in the presence of uncertainty. The first type is Aleatory uncertainty, which results from the naturally occurring variances in the work. Driving from my home to the airport to commute to the job site has aleatory uncertainty. Google says it's 47.1 miles and will take 51 minutes. Rarely, if ever, has it taken exactly 51 minutes, and no person would plan the airport trip on that number - your chances of being late are high. I always allow 2 hours, since I have a special parking spot, which I have a very high probability of finding, from which I can walk straight to the check-in counter, drop my bag, get in the Clear line, board the train, and go to the gate. The travel time to the airport has naturally occurring variability, and it can also have probabilistic event uncertainty. The speed limit is 75 MPH, but sometimes the traffic is slower - that's naturally occurring variability. There is also a probability of road construction, a traffic stop that slows down all the other traffic, the start of a snowstorm, or other probabilistic events.

Duration in software development has Aleatory Uncertainty. If you've concluded that it will take exactly 18 hours to develop a Story, you'll be disappointed when you encounter some delay - or even some way to develop the Story faster.

Aleatory uncertainties create irreducible risks. These risks can only be handled with margin. Schedule margin, cost margin, and technical margin.
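A minimal sketch of how that margin gets sized, using the airport drive as the aleatory example. The triangular bounds below are illustrative assumptions, not measured data:

```python
import random

random.seed(42)

# Aleatory (irreducible) uncertainty in the drive to the airport,
# modeled as a triangular distribution. Bounds are assumptions.
best, most_likely, worst = 45, 51, 90  # minutes

# Note random.triangular's argument order is (low, high, mode)
samples = sorted(random.triangular(best, worst, most_likely)
                 for _ in range(100_000))

def percentile(data, p):
    return data[int(p / 100 * (len(data) - 1))]

p50 = percentile(samples, 50)
p80 = percentile(samples, 80)

# Margin = commit at a confidence level, not at the most likely value
margin = p80 - most_likely
print(f"P50 {p50:.1f} min, P80 {p80:.1f} min, margin {margin:.1f} min")
```

The same arithmetic applies to a Story "estimated" at exactly 18 hours: the single number is just the mode of a distribution, and margin covers the spread.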

The second type of uncertainty in product and software development is Epistemic Uncertainty. This uncertainty creates a risk that has a probability of occurrence.

So let's deconstruct the statement in light of these uncertainties that create risk.

If you're developing products and software that are NOT for real use, either you're a student doing your homework, or you're wasting your customer's money.

This is one of the deepity statements many in the agile community like to make

In the presence of uncertainty, all development is a discovery problem

This means the Capabilities needed by the customer or product manager are defined in terms that may change as the software is developed

The notion that requirements are fixed and immutable is nonsense, even in formal acquisition processes - this is the role of Change Control.

If there are no uncertainties (epistemic or aleatory), then either this project operates in a realm unheard of in the real world, or the project is de minimis, so the variances that impact the project are so low they have no impact on the probability of success.

So,

To deliver we need to know something about the uncertainties on the project - reducible (Epistemic) and irreducible (Aleatory).

To deliver when those paying need to put your work to use, they need some sense of when it will be available for use.

For those paying for your work, they also need to know something about the cost of your effort, since Value cannot be determined without knowing the cost to produce that Value, AND when that Value will be arriving.

To deliver the needed Capabilities (and their Features and Stories), for a needed Cost (to meet the Value equation), at the needed time (to meet the start of revenue generation and cover the time cost of the money used to develop the product) - if there are uncertainties (reducible and irreducible) that create risk, you're going to have to estimate those uncertainties and their impacts on the probability of success for the cost, schedule, and technical performance of your product.

This is an immutable principle of Microeconomics of Software development, Managerial Finance, and Probabilistic Decision Theory.

The #NoEstimates advocates appear to willfully ignore these principles and the practices of making risk-informed decisions in the presence of the uncertainties encountered on ALL software development projects. Ignore their advice, since they failed to understand ...

Risk Management is How Adults Manage Projects - Tim Lister

42 - Estimation is a dysfunctional illusion. You can't know how long things take. *BUDGET* investments & use #ContinuousDelivery to be collecting feedback as you go. It is IRRESPONSIBLE to make an investment based only on an estimate (60% delays are common)

Let's deconstruct this conjecture:

Estimation is a dysfunctional illusion - for someone like the OP it may be, since he appears to be wholly uninformed about how software estimates are made, used, and validated. Of course, here's a starting point: Agile Estimating. Now if you don't read how to estimate, don't practice the advice in these papers, and don't look into the tools for estimating software, then you have NO basis on which to claim estimation is a dysfunctional illusion. It's just unsubstantiated opinion.

You can't tell how long it will take - another unsubstantiated opinion. If you don't know what Done looks like in some meaningful units of measure, you probably can't tell how long it takes. This means:

You have no product road map

You have no release plan

You have no experience developing software in this domain

You pretty much know nothing about the problem or the solution

Budget investments and use continuous delivery - delivery of what? Any road map?

Any uncertainties in that delivery process?

Any variabilities in the productivity of the development team?

Any changes in the needed capabilities?

Any reducible risks?

Any irreducible risks?

If NO, then just code anyway and deliver code

If YES, you'd better be estimating
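If the answer is YES, even the simplest empirical estimate beats none. Here's a hedged sketch - the backlog size and throughput history are invented numbers - that resamples past sprint throughput to forecast completion:

```python
import random

random.seed(1)

throughput_history = [6, 4, 7, 5, 3, 6, 5]  # stories/sprint, illustrative
backlog = 60                                 # stories remaining, assumed
trials = 10_000

def sprints_to_finish():
    # Replay history at random until the backlog is exhausted.
    done, sprints = 0, 0
    while done < backlog:
        done += random.choice(throughput_history)  # resample the past
        sprints += 1
    return sprints

results = sorted(sprints_to_finish() for _ in range(trials))
p50 = results[len(results) // 2]
p80 = results[int(0.8 * len(results))]
print(f"50% confident: {p50} sprints or fewer; 80% confident: {p80}")
```

Ten lines of empirical data beat "we'll be done when we're done" - and this forecast improves every sprint as real throughput replaces the assumed history.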

It is irresponsible to make an investment based only on an estimate - yes, and NO ONE does that. At least no credible business manager does. This is the classic tautology of the OP, posted as somehow new information. This is called a deepity - a proposition that seems profound because it is logically ill-formed. It has (at least) two readings and balances precariously between them. On one reading it is true but trivial. On another it is false, but would be earth-shattering if true.

60% delays are common - WHY? No Root Cause is ever provided by the OP for these claims. Without the root cause (setting aside that the number is bogus), any suggestion is bogus.

So, in the end, it comes down to this

There is no principle by which you can make a decision in the presence of uncertainty (reducible - Epistemic, or irreducible - Aleatory) for a non-trivial spend of other people's money without making estimates of the impacts of that decision. Any claim that you can willfully ignores the principles of managerial finance, the microeconomics of software development decision making, and probabilistic management control of the business.

When you hear you can, that person is willfully ignoring these principles and is a prime example of Doing Stupid Things on Purpose.

Some in the agile community who have drunk the Koolaid of #NoEstimates firmly believe that estimates are waste - that decisions can be made in the presence of uncertainty without estimates, with focus on producing Value but no consideration for the cost to produce that Value, when that Value will arrive, the Measures of Effectiveness and Measures of Performance for that Value, or any of the measures used in a closed-loop control system to manage the work effort in the presence of uncertainty.

Just start coding and the customer will tell them when to stop. Here's an EDS video from the same advertising firm that inspired me to name my blog Herding Cats.

A #NoEstimates advocate claims that having a ±10% accuracy for estimates of cost and duration is a dangerous thing. With what appears to be NO understanding of how to estimate, this author ignores the processes used in developing products or services in the presence of uncertainty.

In our software-intensive system of systems domain, we develop proposals with 80% confidence of completing on time and on budget at the time of submission. The ±10% value has no context (as usual), but that range is certainly possible using the processes of probabilistic modeling of the project.

Here's how:

Build a model of the needed Capabilities

Define Reference Classes for those Capabilities and the Features that implement them. We develop these reference classes using Agile Function Points.

There are databases for Function Points

Use these to develop a Systems Model of the products

Define the probabilistic ranges of the work, starting from a single-point estimate

This means defining the Most Likely value for the range of duration or cost for the item

Define the upper and lower bounds of this Most Likely value

Define the Probability Distribution Function for this range. Use the Triangle Distribution when there is no past performance data

Define the dependencies between the Capabilities and the Features in some form

If not these approaches, use some process of making the connections between the Capabilities, the Features, and the outcomes visible. The common Agile development tools (Rally, JIRA, Team Foundation Server) have embedded tools for making these charts.

Define the risks - reducible and irreducible - to each Capability and their Features

For each risk define the probability of occurrence, the probability of impact, the probabilities of duration or cost impacts from that impact, the probability of success for the corrective or preventive actions, and the probability of any residual risk

Place all the information into some modeling tool
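For those without a commercial tool, here is a minimal sketch of what such a model does. The features, duration ranges, and risk probabilities are illustrative assumptions, not a recommended structure:

```python
import random

random.seed(2)

# Features with (best, most_likely, worst) durations in days - assumptions.
features = {
    "login":    (3, 5, 10),
    "payments": (8, 13, 25),
    "reports":  (5, 8, 15),
}

# Epistemic (reducible) risks: (probability of occurring, impact range in days)
risks = [
    (0.30, (2, 5, 10)),   # e.g., third-party API change - assumed
    (0.10, (5, 10, 20)),  # e.g., security rework - assumed
]

def one_trial():
    # Aleatory spread: draw each feature from its triangular range.
    # random.triangular's argument order is (low, high, mode).
    total = sum(random.triangular(b, w, m) for b, m, w in features.values())
    # Epistemic events: each risk fires with its probability of occurrence.
    for p_occur, (b, m, w) in risks:
        if random.random() < p_occur:
            total += random.triangular(b, w, m)
    return total

trials = sorted(one_trial() for _ in range(50_000))
p80 = trials[int(0.8 * (len(trials) - 1))]
print(f"80% confident of finishing in {p80:.0f} days or fewer")
```

Dependencies between features, correlation, and residual risk belong in a real tool; the point is that the 80% confidence number falls out of the model, not out of opinion.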

If you don't have one, ask this critically important question

What is the Value at Risk for your Project?

This is the core question for any discussion of the need for, the value of, or outcome of estimating. If your Value at Risk is low, then all this is not likely to be of concern.

But without the answer to that Value at Risk question, any suggestion to do anything is baseless, since there is no consideration of the impact of the suggestion to NOT do something.

So how do we get ±10% accuracy?

Apply margin for the irreducible uncertainties that create the risk.

Perform work, using budget, to retire the reducible uncertainties that create risk.

Both of these actions cost money. You can spend money to buy down risk, and that buy-down reduces the variances in the cost and schedule and increases the probability of project success for that cost and schedule.

This is a closed-loop system optimization process applied to all the projects we work on.

With this process, you can get a ±10% range for any estimate. Is this normal? That's a different question. 80% confidence of on or before and at or below is our norm. But the firm where the ±10% range was needed may well have a need to control the Value at Risk for the project.

39 - You Don't Need to Know What Done Looks Like, Just Have a Small Plan to the Next Point

There's an ongoing notion in the agile domain that we don't need or even want a Plan that shows us what Done looks like - that the agile team is exploring new territory and that plans (maps, in the analogy) are of little use.

It is about the inadequacy of maps when you are navigating *new* territory. There's only so much we can predict up front.

A nice platitude, but platitudes don't put money in the bank. And money in the bank is what software development is about. If you're navigating new territory without some kind of map, you're lost and are unwilling to admit it.

It's straightforward to construct a map of the territory:

What capabilities do the customers need in order to spend money on the product or service?

If you don't know this at some high level, do not spend a dime.

If you can't state one or two needed capabilities a customer would be willing to pay money for, you're unqualified to be spending the firm's money.

If you don't know anything about what customers might pay money for, then you're in the wrong business.

With this short list of capabilities, what problems would they solve for those willing to pay for them?

In your experience, what might be the effort and time needed to produce one of the capabilities?

If you don't know the answer to that, those paying you should go find someone who does.

The notion of the inadequacy of maps is really a statement about the inadequacy of YOU to build such a simple, first cut, top-level map. This is likely the situation the Original Poster is in.

One analogy for this condition is that of the Watchmaker and the Gardener. [1], [2]

When a system is bounded with relatively static, well-understood requirements, classical methods of development are applicable. At the other end of the spectrum, when systems are not well defined and each is individually reacting to technology and mission changes, the environment for any given system becomes essentially unpredictable.

The metaphor of the watchmaker and gardener is useful to describe the differences between development in the two types of environments. Traditional software development processes are like watchmaking. Its processes, techniques, and tools are applicable to difficult problems that are essentially deterministic or reductionist.

Like gardening, development of software products draws on the fundamental principles of evolution, ecology, and adaptation. It uses techniques to increase the likelihood of desirable or favorable outcomes in environments characterized by uncertainty and that may change in unpredictable ways. This approach to development is not a replacement for classical development. It is a method to get started. Both disciplines must be used in combination to achieve success.

Now to the Next Step

If you're spending your customer's money, and you AND your customer don't have some shared sense of what direction you're going - some goal - and the pace you are making toward that goal (velocity, by the way, is not a single number; velocity is a Vector, with Direction and Speed), and you don't know how long it's going to take - to some agreed-upon accuracy and precision - to get to a stopping point, then that customer is spending her hard-earned money with no idea of what Done looks like.

This is called a Death March project. And those suggesting that Plans are not needed beyond the next Sprint are on a Death March by their own choice. Another DDSTOP.

This might be the case in some science experiments. But even that is nonsense. Our son is a scientist with funding from outside sources, and when they provide him with money, they have an expectation of a deliverable at the end of the period of performance.

This is an example of someone making a claim with little or no understanding of how business works - how those paying for value manage their money in the presence of uncertainty. Another example of DDSTOP, in this case Doing Things with NO understanding of Managerial Finance, the Microeconomics of Software Development, or Probabilistic Decision Making. If the OP had simply Googled the topic, he too would have found the materials we use in our domain to manage software development projects in the presence of emerging uncertainty, starting with little certainty as to the needed capabilities.

References

Whenever you hear a conjecture, the first thing to do is go to Google and start exploring: see if the conjecture is credible, whether there are already answers to the supposedly unanswered questions, and learn for yourself whether the person making the conjecture has any credibility. Here's a quick - under 5 minutes - sample of how to make plans in the presence of uncertainty.

[3] "A Framework for Understanding Uncertainty and its Mitigation and Exploitation in Complex Systems," Dr. Hugh McManus and Prof. Daniel Hastings, Fifteenth Annual International Symposium of the International Council on Systems Engineering, 10-15 July 2005

38 - Failure to Understand Planning in Presence of Uncertainty

The moment you stop believing you can predict the future, waterfall planning approaches make no sense

When managing software development in the presence of uncertainty, precision and accuracy are variables that must be defined before any estimates are made. This is independent of the software development processes - be it agile or traditional.

Waterfall is a term no longer used in the domain I work in. It's also a code word for bad project management. Iterative and incremental development are standard practices everywhere from software development to large construction (Lean Construction). The use of Waterfall is a dog whistle for those willfully ignoring the principles of managing anything in the presence of uncertainty - in this case, people who have been convinced that estimates are not needed by those paying them for their work.

37 - Judgment from Experience Requires Repeatability

Judgment from experience requires repeatability (experts work best in Complicated or Ordered Cynefin domains). The moment you have little-to-no repeatability, experts are at best useless; adaptability is a better survival strategy.

If the work is non-repeatable, the expert's experience is absolutely necessary for a simple reason.

If you're not an expert, you're not going to recognize the possible solutions, risks, impediments, and opportunities for the problems you'll encounter in developing a solution that has never been developed before. This is why we hire experienced experts - they keep us out of trouble. To be very crass, if you're not an expert, you're very likely to not know WTF you're doing. Seems the author of the quote fits that description.

This knowledge starts with Reference Class Forecasting, a method of predicting the future (cost, schedule, technical performance) by looking at similar past situations and their outcomes. Reference class forecasting predicts the outcome of an action based on actual outcomes in a reference class of actions similar to the one being forecast.
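Mechanically, reference class forecasting is small enough to sketch in a few lines. The overrun ratios and inside-view estimate below are invented for illustration - real reference classes come from databases of past projects:

```python
# Reference class forecasting sketch: correct an inside-view estimate
# with the distribution of actual/estimate ratios from similar past
# projects. The ratios below are illustrative, not real data.
actual_over_estimate = [1.1, 1.4, 0.9, 1.6, 1.2, 1.3, 1.0, 1.8, 1.25, 1.5]

inside_view_days = 100                              # our raw estimate
ratios = sorted(actual_over_estimate)
p80_ratio = ratios[int(0.8 * (len(ratios) - 1))]    # 80th-percentile overrun

forecast = inside_view_days * p80_ratio
print(f"Reference-class P80 forecast: {forecast:.0f} days")
```

The outside view tempers the optimism baked into the inside view - the larger the reference class, the better the correction.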

Where do you find these Reference Classes?

For cost and schedule, there are databases containing thousands of past projects.

For technical design, there are existing designs, patterns, packages, architecture references, and similar resources. Yes, you have to pay for them, but that cost is cheap compared to a naive and novice approach based on experimenting with other people's money.

In all engineering worlds, from software engineering to bending metal for money, there is really nothing new under the sun. If it is truly new and never before seen, then it's called a science experiment. Rarely are software engineers working on science experiments. And even if it is a science experiment (both our children work in the science world), the very first thing you do is a literature search to see what other people have done in your field around the question you are trying to answer.

In our domain of aerospace and defense, we have reference classes for cost, schedule, architecture (DODAF), risk, and other attributes of every system ever built in the DOD (CAPE, CADE) and NASA (CARDe/Once).

Your domain may have NO reference classes. This is why you should hire someone who's done this before and ignore those who state the fallacy that...

Judgment from experience requires repeatability. The moment you have little-to-no repeatability experts are at best useless, adaptability is a better survival strategy.

Adaptability is of little use if you don't know the boundaries of the technology, processes, and uncertainties of the problem. If you don't know, it's a wonderful way to spend the money of those paying you to learn (the hard way). So check with them first if they're willing to fund your education to learn how to solve problems in a domain you have no expertise in when you could have simply hired someone who does.

As another observation, Cynefin gives us four quadrants of systems. But it tells us NOTHING about how to stay out of those quadrants, or what actions are needed to move from one quadrant to another. As a Systems Engineer, I find it an interesting notion, but it has NO actionable outcomes beyond the making of an observation.

It's like standing on the dock watching the ships pass by, when what is needed is to be on the bridge, at the helm, making sure the ship leaves the harbor safely. Take a look at "Complexity Primer for Systems Engineers" as a start, and then some more resources for managing in the presence of complexity.

36. The moment you have little-to-no repeatability experts are at best useless

Yet another classic #NoEstimates failure to understand estimating, reference class forecasting, reference class databases, parametric estimating, estimating tools, and other well-established estimating processes in the agile software development world.

Judgment from experience requires repeatability (experts work best in Complicated or Ordered Cynefin domains). The moment you have little-to-no repeatability, experts are at best useless; adaptability is a better survival strategy.

35. It is in doing the work that we discover what work we must do

This is the classic #NoEstimates vision of how software is developed. But what it really says is

I don't have a clue what done looks like, so I'll just start spending my customer's money and she'll tell me what to do, when to do it, and when to stop doing it

Or the companion quote from an Agile thought leader

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. If we had done it before, we’d just give it to you.

That way, you can start ignoring the advice of those Thought Leaders when they say I don't really know, but here's how I'd start and replace that with I have some notion of what this software needs to do, let me go look at some reference database, build my ConOps model, and get a first-order estimate, and then we can refine that with more input.

So here's the way out of this dilemma of not knowing what to do with the only solution of start coding. This solution is called Software Engineering. Let's start with the obvious...

Hire someone who does

Unless you're inventing new physics, there is someone, somewhere that has some clue about the effort, duration, and cost to develop what you want.

I've worked on inventing new physics projects and in our proposal to the USAF Office of Scientific Research, they asked for an estimate of the needed funds, and what might we find at the end of those funds in exchange for their investment.

The answer to the question is to build a parametric model of the system's highest-level programmatic and technical architecture and ask simple, straightforward questions to produce rough order of magnitude estimates of cost, schedule, and technical performance.

So even the inventing new physics excuse is lame.
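To make the "build a parametric model" step concrete, here is a minimal sketch of a parametric rough order of magnitude estimate in the style of Basic COCOMO (the organic-mode constants come from Boehm's Software Engineering Economics). The 45 KLOC size input is hypothetical; in practice the constants would be calibrated against your own reference class.

```python
# A minimal parametric ROM estimate, Basic COCOMO style.
# Organic-mode constants (a=2.4, b=1.05, c=2.5, d=0.38) are from Boehm;
# the 45 KLOC size input is a hypothetical example, not real project data.

def rom_estimate(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, duration in calendar months)."""
    effort = a * kloc ** b          # person-months
    duration = c * effort ** d      # calendar months
    return effort, duration

effort, duration = rom_estimate(45.0)
print(f"Effort:    {effort:.0f} person-months")
print(f"Duration:  {duration:.1f} months")
print(f"Avg staff: {effort / duration:.1f} people")
```

Even this crude power-law model answers the proposal question: roughly how much effort, over what duration, for what we think we are building.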

Subscribe to one of several reference class estimating sites

NESMA - an independent international organization focused on software metrics and software measurement.

COSMIC - a voluntary, worldwide grouping of software metrics experts. Started in 1998, COSMIC has developed the most advanced method of measuring the functional size of software. Such sizes are important as measures of software project work-output and for estimating project effort.

Develop a Concept of Operations and use Function Point Analysis to develop an estimate

From that ConOps, decompose the elements using Function Point Analysis, place those elements in an IFPUG tool, and see what it produces

Use one or all of the reference class sites above to calibrate your FP model
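As a sketch of what that decomposition produces, here is an unadjusted function point count using the IFPUG average-complexity weights (EI=4, EO=5, EQ=4, ILF=10, EIF=7). The element counts and the hours-per-FP delivery rate below are hypothetical; the rate should come from your calibrated reference class data.

```python
# Unadjusted function point count with IFPUG average-complexity weights.
# The element counts (from a hypothetical ConOps decomposition) and the
# 8 hours-per-FP delivery rate are illustrative assumptions.

AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2}

ufp = sum(AVG_WEIGHTS[k] * n for k, n in counts.items())
hours_per_fp = 8.0   # hypothetical, from calibration against past projects
print(f"Unadjusted FP:      {ufp}")
print(f"First-order effort: {ufp * hours_per_fp:.0f} hours")
```

The point is not the specific numbers, but that a defensible first-order estimate falls out of the ConOps decomposition in an afternoon.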

34. At the #NoProjects #NoEstimates workshop it was mentioned that a focus on "on-time, on-budget" naturally leads the project to meet those goals and forget the value/benefits side of the equation

We have a quote we use when we're called onto a program that is DSTOP, that goes like this...

What's the difference between this Program and the Boy Scouts? The Boy Scouts have adult supervision

This is yet another example of Doing Stupid Things on Purpose. If we take Tim Lister's advice about managing risk while spending other people's money in the presence of uncertainty, where is the adult supervision here?

33. The First Step in Any Project is to grossly Underestimate its Complexity and Difficulty and its companion from the same author Finishing a project "on budget and on time" is not an indication of success. It is merely the result of gaming an easily gamed system.

These are classic examples from an author who is either unskilled, untrained, and inexperienced in estimating software development, or who willfully ignores the knowledge and resources readily available in textbooks, papers, tools, and training for how to create credible estimates for software systems. I suspect the latter.

What these quotes actually say is I have no intention of learning how to estimate cost, schedule, and technical performance because I don't want to. My customer doesn't care what I do. And, my customer is equally as clueless about the need to estimate as I am.

I know it's your money, but I'm going to ignore that and behave as if it's my money. I'll do any damned thing with it I want; you can't tell me how to spend it.

This pretty much sums up the basis for the #NoEstimates argument.

32. There are always uncertainties, that's why estimation does not work. We have prioritized work, not deadlines. We measure the outcome of value, not time.

This is one of those patently false conjectures. It is exactly because of uncertainties that estimates are needed.

If you're working on a software project with NO deadlines, then you're a very lucky person. What that means is those paying you have no need to recoup their investment on any timeline. Just keep spending until told to stop spending. Prioritization then becomes the order in which those paying you want the work done. They don't care about cost, risk, schedule, or the probability that they'll get what they paid you for. You just labor, and someone else outside your domain will be making the decisions.

But an open question remains. How do you prioritize the work in the presence of uncertainty about its Value, Cost, Effort, and Duration to produce that Value, without making an estimate?

31. Read the #NoEstimates book and come to the workshop to see how I boldly claim that it can save your company millions.

Save Millions of Dollars? Does that make sense to anyone who has worked in software development? Probably not.

Exactly how these savings are achieved is not actually stated in the book. This is the claim from the book:

Let’s say you approach a traditional consultancy business and tell them “hey, I can show you how to turn that 8 year - 500 people - 250 million euro project for your customer into a 1 year - 10 people - 0.9 million euro Agile project.”

OK, let's say you can. Is there any evidence that NOT Estimating is the corrective or preventive action that enables the claimed outcome (other than dropping scope to meet the deadline and cost target)? Any project of that size can likely save a few million by simple effectiveness and efficiency improvements.

But the author claims:

The duration is reduced from 8 years to 1 year, an 87.5% decrease in the duration of the project. For the same scope? Didn't say. By simply NOT Estimating? Show how that is done.

The cost is reduced from 250 Million € to 0.9 Million €, a 99.64% reduction in the cost. For the same scope? Didn't say.

The headcount is reduced from 500 people to 10 people, a 98% reduction in absorbed labor. All by NOT Estimating? With no evidence of how to do that, by the way.

There's one way to do this. Don't deliver 98% of what is needed. Then claim Not Estimating fixed the problems.

This is a continuous claim by #NoEstimates advocates that has not one piece of credible evidence to back it up. No case study, no data, no description of what was the source of these savings other than NOT Estimating, no description of the processes, other than Not Estimating, no Root Cause analysis of the core problems with the project in the first place that, when corrected or prevented, would have removed the cost, schedule, and headcount impacts. No nothing.

As a physicist, there is a quote we learned in graduate school for when someone comes in and makes an outrageous claim - a cockamamie claim actually:

This isn't right. This isn't even wrong

The phrase "isn't even wrong" describes an argument or explanation that purports to be credible but is based on invalid reasoning or speculative premises that can neither be proven correct nor falsified. Hence, it refers to statements that cannot be discussed in a rigorous sense. [1] For a meaningful discussion on whether a certain statement is true or false, the statement must satisfy the criterion called "falsifiability" — the inherent possibility for the statement to be tested and found false. In this sense, the phrase "not even wrong" is synonymous to "nonfalsifiable". [1]

The phrase is generally attributed to theoretical physicist Wolfgang Pauli, who was known for his colorful objections to incorrect or careless thinking. [2][3] Rudolf Peierls documents an instance in which "a friend showed Pauli the paper of a young physicist which he suspected was not of great value but on which he wanted Pauli's views. Pauli remarked sadly, 'It is not even wrong'."[4] This is also often quoted as "That is not only not right; it is not even wrong", or in Pauli's native German,

"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!".

Peierls remarks that quite a few apocryphal stories of this kind have been circulated and mentions that he listed only the ones personally vouched for by him. He also quotes another example when Pauli replied to Lev Landau, "What you said was so confused that one could not tell whether it was nonsense or not." [4]

This is now the Archetype of the #NoEstimates arguments.

#NoEstimates is not falsifiable

The conjectures cannot be discussed in any rigorous sense, since there are no principles by which decisions can be made in the presence of uncertainty without estimating the outcomes and impacts of those decisions. To suggest there are violates the principles of Microeconomics, human decision-making in the presence of scarce resources, Managerial Finance, and Probabilistic Decision-making.

The conjecture cannot be tested to be True or False

In any field based on principles (Microeconomics of software development is such an example) and the practices and processes of applying those principles, there are two distinct ways of not being correct:

For something to be wrong it must have some grounding in reality and must follow some legitimate string of logic. It may be wrong, but it had the theoretical possibility of being right.

To be “not even wrong,” something must be so far off from reality or contain such a glaring logical flaw that it at no point would reasonably be considered correct.

#NoEstimates is the latter. The claim of #NoEstimates is not an error in logic or reasoning, it's pseudoscience, designed to play to the fears of software developers that when they provide an estimate it will be misused and abused by their Bad managers, so the fallacy is let's not make estimates and the outcomes of the project will be acceptable to those paying for the work.

This post goes along with the fallacy posted previously - Risk is not there to be mitigated, it's there to be eliminated. And my other favorite from the same author Estimates lead to buffers. Buffers lead to waste. Waste leads to ruin.

Uncertainty comes in two forms

Reducible (Epistemic, the study of knowledge)

Irreducible (Aleatory, Latin for a single die)

There are four kinds of reducible (Epistemic) uncertainties that create a risk to software development projects

Reducible Cost Risk - is often associated with unidentified reducible Technical risks, changes in technical requirements and their propagation that impacts cost.

Reducible Schedule Risk - Schedule Risk Analysis (SRA) (Monte Carlo Simulation and the Method of Moments, for example) is an effective technique to connect the risk information of project activities to the baseline schedule, providing information on the sensitivity of individual project activities and assessing the potential impact of uncertainty on the final project duration and cost.

Reducible Technical Risk - is the impact on a project, system, or entire infrastructure when the outcomes from engineering development do not work as expected, do not provide the needed technical performance, or create higher than the planned risk to the performance of the system.

Reducible Cost Estimating Risk - is dependent on technical, schedule, and programmatic risks, which must be assessed to provide an accurate picture of the project cost. Cost risk estimating assessment addresses the cost, schedule, and technical risks that impact the cost estimate.

There are three kinds of irreducible (Aleatory) uncertainties that create the risk to software development projects

Irreducible Schedule Risk

Projects are over budget and behind schedule, to some extent because uncertainties are not accounted for in schedule estimates. Research and practice are now addressing this problem, often by using Monte Carlo methods to simulate the effect of variances in work package costs and durations on total cost and date of completion. However, many such project risk approaches ignore the significant impact of probabilistic correlation on work package cost and duration predictions.

Irreducible schedule risk is handled with Schedule Margin, which is defined as the amount of added time needed to achieve a significant event with an acceptable probability of success. Significant events are major contractual milestones or deliverables.
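A minimal sketch of how that schedule margin is derived from a simulation: Monte Carlo over three-point (triangular) duration estimates for tasks on a serial path, with margin taken as the difference between the 80th-percentile completion and the deterministic (most-likely) plan. The three-point estimates below are hypothetical.

```python
# Schedule Risk Analysis sketch: Monte Carlo over triangular duration
# distributions for tasks on a serial path. Schedule margin is taken as
# the P80 completion minus the deterministic plan. Three-point estimates
# (low, most-likely, high), in days, are hypothetical examples.
import random

random.seed(7)

tasks = [
    (8, 10, 16),
    (4, 5, 9),
    (12, 15, 25),
]

N = 20_000
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)
    for _ in range(N)
)
deterministic = sum(ml for _, ml, _ in tasks)   # the "plan": 30 days
p80 = totals[int(0.80 * N)]
print(f"Deterministic plan: {deterministic} days")
print(f"P80 completion:     {p80:.1f} days")
print(f"Schedule margin:    {p80 - deterministic:.1f} days")
```

Note the asymmetry of the triangular distributions: the P80 completion sits well above the most-likely plan, which is exactly the margin the deterministic schedule hides.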

Irreducible Cost Risk

Irreducible cost risk is handled by Management Reserve and Cost Contingency, which are program cost elements related to program risks and an integral part of the program's cost estimate. Cost Contingency addresses the Ontological Uncertainties of the program. The Confidence Levels for the Management Reserve and Cost Contingency are based on the program's risk assumptions, program complexity, program size, and program criticality.

When estimating the cost of work, that resulting cost number is a random variable. Point estimates of cost have little value in the presence of uncertainty. The planned unit cost of a deliverable is rarely the actual cost of that item. Covering the variance in the cost of goods may or may not be appropriate for Management Reserve.

Irreducible Technical Risk

If we define Margin as the difference between the maximum possible value and the maximum expected value, and Contingency as the difference between the current best estimate and the maximum expected estimate, then for the systems under development, the technical resources and the technical performance values carry both margin and contingency.

So managing in the presence of uncertainty and the risk it creates mandates making estimates:

Estimates of how much margin is needed to protect the project from the irreducible uncertainties (aleatory)

This margin is usually calculated using some form of simulation or a reference class.

For Epistemic uncertainty that creates reducible risks, a specific risk buy-down process is needed to remove the undesirable outcomes:

Buy two in case one breaks

Build the system with a 20% performance buffer to handle the unanticipated workload

Build a fault-tolerant and fail-safe system behavior (this was my specialty many years ago)

The author of those quotes loves to reference macroeconomics texts, ignoring the fact that software development is microeconomics. It sounds impressive when he does this, but it's not applicable to making decisions in the presence of uncertainty while writing software and using other people's money, for markets where the buyers are making decisions on the technical value received in exchange for the cost.

29. Only through a survival heuristic like Kelly Criteria. You survive uncertainty, you don't remove it.

I've split out the second part of the post above to address this fallacy directly

The Kelly Criterion is a gambling paradigm for knowing how much to bet.

This can be applied to investing in a portfolio or applied in a casino.

If you think writing software for money is like throwing the dice in Las Vegas, then stop reading right here.
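For reference, the Kelly Criterion itself is simple, which is exactly the problem: it assumes you already know the win probability and the payoff odds. A sketch, with illustrative numbers (55% win probability at even odds):

```python
# Kelly criterion: f* = p - q/b, where p is the win probability,
# q = 1 - p, and b is the net odds received on the wager.
# The 55% / even-odds inputs below are illustrative, not from any project.

def kelly_fraction(p, b):
    """Optimal fraction of the bankroll to wager (0 if the edge is negative)."""
    q = 1.0 - p
    return max(0.0, p - q / b)

# 55% chance of winning at even odds (b = 1) -> wager 10% of the bankroll.
print(f"{kelly_fraction(0.55, 1.0):.2f}")
```

Software projects have neither a known win probability nor known payoff odds, which is why a bet-sizing heuristic is not a substitute for estimating.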

Those financial instruments you're investing in have externalities driving them.

The global and national market, the behaviors of the firm

The competition

The financial management system in terms of financing rates, the cost of money, and the debt market.

Software development projects have some externalities driving them - ontological uncertainties - but it would be a very naive risk manager who let those externalities control the project.

A risk management plan defines the reducible and irreducible uncertainties that create a risk

A risk handling plan defines the responses to these as:

Mitigation

Avoidance

Transfer

Acceptance

The original poster uses macroeconomics terms from a couple of books he's read by a controversial author from the bond-trading domain and applies them to software development projects.

This is an equivocation fallacy, used often by those wanting the established principles to not be applicable to their domain.

28. Agile is about responding to change, over following a plan. We must recognize Estimates endanger that goal

How estimates endanger the goal of responding to change is not stated. And responding to change over following the plan is an example of not knowing what Planning is about.

Plans are Strategies. Strategies are Hypotheses. Hypotheses require empirical data to test them. This is basic high school scientific method stuff that appears to be willfully ignored by the writers of that Agile Manifesto phrase. If you don't have a plan in some form, with tests of progress to plan needed to take corrective or preventive actions, then you're on a Death March project. If you're spending other people's money, you won't be for long, because they will - or should - fire you for being incompetent as the steward of their funds.

Since making decisions in the presence of uncertainty requires buying knowledge about the possible outcomes of our decision, we need to have knowledge of both reducible and irreducible risk created by uncertainty. Reducible risk can be bought. Irreducible risk can only be protected against with margin.

If you don't have a Plan, you don't know what Done looks like in any way meaningful to those paying for your work. In that case, you're on a Death March project, whose only stopping condition is when you run out of money or time.

This is the purpose of the Product Roadmap and Release Plan.

It can be as simple as a list of needed Features on sticky notes on the wall.

It can be as complex as a Scrum of Scrums strategy map in SAFe 4.2 in Rally.

But without some visible picture of what Done looks like, no one writing code knows why or for what they're doing that work. They're just spending the customer's money for no defined reason.

Plans are strategies

Strategies are hypotheses.

Hypotheses require tests to confirm they are valid (just like you learned in your High School science class).

Agile provides the tests to the hypothesis through working software.

All projects operate in the presence of uncertainty, reducible and irreducible.

The corrective and preventive actions needed to address the risk produced by those uncertainties are in the Plan. Otherwise, when they occur, you're caught flat-footed with no plan, and those paying you will doubt you can deliver what you said you would in exchange for your paycheck.

Managing in the presence of uncertainty requires making estimates since the future is uncertain.

Since risk management is how adults manage projects - Tim Lister

#NoEstimates means the obvious alternative to managing as an adult.

27. How to (write software with agile) (1) Define the most important thing. (2) Work ONLY on that until finished. (3) Repeat

This is a good way to work in priority order, but it tells us nothing about when that backlog of work will be done.

Where's the Product Roadmap and Release Plan?

Where's the process that defines that priority order?

Product Roadmap

Release Plan

What're the Measures of Effectiveness and Measures of Performance that define the priority order?

How about the Key Performance Parameters for those most important things?

Where's the estimate to complete, developed from past performance, risk-adjusted for future uncertainties - reducible and irreducible?

This notion is suggested by a leading agile thought leader, but it completely ignores the process of writing software when there is a needed delivery date, a needed budget, and a needed set of Capabilities for that time and budget.

26. Estimation destroys making decisions in the presence of uncertainty by premature commitments.

This is literally willfully ignoring the established, documented, tested, verified, validated processes of good software estimating.

Nothing else to say here. The author of that phrase must never have read a single book, attended a single class, or observed a single successful estimating process. Or the author is "selling" us a pig in a poke.

I think it's the latter.

25. Have 300 Product Backlog Items that you groom every two weeks.

Where's the Product Backlog and Release Plan showing when those Capabilities, Features, and Stories will be needed and when you should start definitizing their content, acceptance criteria, and top-level estimates?

Where's the Rough Order of Magnitude (not 10x, but just a rough estimate) estimates for each Feature in some Tee-Shirt size mapped to hours, taken from the empirical data of the past performance, collected from your agile software development management tool - automatically?

XS - 1 to 4 hours

S - 5 to 12 hours

M - 13 to 24 hours

L - 25 to 48 hours

XL - 49 to 64 hours

Set the number of productive hours per day to 6, to cover other work.

For XL Stories, they will be sliced into smaller stories, as has been done for 30 years in all good software development domains.

The Development Team will check that the Product Owner's Tee-Shirt size estimate makes sense, based on their understanding of the Story and the historical data captured in the development management tool from all past work (Reference Class Forecasting), confirming the PO's estimate is credible.

Prior to the Sprint becoming active, Sub Task hours are estimated by the Development Team in a "Story Time" session before a "Capacity Based Commitment" is made.
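The process above can be sketched as a capacity-based commitment calculation: map Tee-Shirt sizes to the hour ranges in the table, use 6 productive hours per person per day, and fill the Sprint in priority order until capacity is reached. The team size, sprint length, and backlog below are hypothetical.

```python
# Capacity-based commitment sketch. Tee-Shirt hour ranges follow the
# table above; the 2-person, 10-day sprint and the backlog are
# hypothetical illustrations, not real project data.

SIZE_HOURS = {"XS": (1, 4), "S": (5, 12), "M": (13, 24), "L": (25, 48)}

def sprint_capacity(people, days, hours_per_day=6):
    return people * days * hours_per_day

def commit(backlog, capacity):
    """Take stories in priority order, budgeting each at the top of its range."""
    committed, used = [], 0
    for story, size in backlog:
        cost = SIZE_HOURS[size][1]   # conservative: high end of the range
        if used + cost <= capacity:
            committed.append(story)
            used += cost
    return committed, used

backlog = [("login", "M"), ("search", "L"), ("audit-log", "S"), ("export", "M")]
cap = sprint_capacity(people=2, days=5)     # 60 hours
stories, used = commit(backlog, cap)
print(stories, used)
```

Budgeting at the high end of each size range is a design choice: the commitment is then conservative, and any story finishing early frees capacity rather than blowing the Sprint.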

Where's the Product Owner to produce those ROM estimates so the developers don't have to?

Why are you re-estimating work that hasn't been definitized yet? Work that may not be started for months?

Why are you grooming Stories and Features for Sprints beyond the next two? This is supposed to be an Agile project, where feedback from the User drives the emergence of new and better requirements. You're locking in the requirements for Features and Capabilities that haven't been verified by "working software."

Another perfect example of Doing Stupid Things on Purpose, then claiming NOT estimating will fix them.

24. Have a business and technical management process that only produces 8 hours a week of productive work, where up to 80% of the duration of the work is chewed up by delays, dependencies, interruptions, illness, and related absences of the workforce.

Where in the business is the Adult Supervision that allows the workforce to have only 8 hours of productive work out of the 40 hours available during the week?

Think about this: 96 minutes out of 480 minutes a day are used to generate value, in exchange for 480 minutes of paid planned work.

A sample of 6 departments and firms shows about 32 hours of planned productivity a week (80%) and a measured productivity of 30 hours a week (75%). The non-productive hours include breaks, training, travel, and non-project meetings.

The actual number of "sick" hours is very low on an annual basis, but those hours are baked into the PTO (Personal Time Off) baseline across the year and are easily modeled in the capacity planning baseline for the Scrum team.

23. A deadly sin in estimation: estimating the NUMBER of people for a project without any understanding of "team" or collaboration

If that's the level of understanding, skill, and experience in estimating, then those paying for the project need to find a better person to do the estimating.

This is one of those toss-off lines, with no understanding of how estimates are actually made. Which is becoming clearer as time passes.

22. When you read about examples of bad management, like misusing estimating, misusing tools, misusing processes, even misusing paradigms, and you don't hear about how to Prevent or Correct those misuses, then that's the very definition of Doing Stupid Things on Purpose. I'll start listing the links and examples of this classification of DSTOP below this bullet. Here's the first one:

Cost accounting has its place - in deterministic systems, not complex ones. - No, it doesn't. Cost accounting is even more important in complex, evolving, emerging, uncertain environments.

In the presence of these uncertainties, cost accounting is critical, since the variabilities of the budget impacts from the actual costs must be known to some degree of confidence to make credible decisions for the future.

21. A classic fallacy of #NoEstimates - Estimates: Never credible if the person giving them is not risking anything when giving them.

This willfully VIOLATES the core principle of Independent Cost Estimating (ICE), an independent cost estimate process to assist in determining the reasonableness or unreasonableness of the bid or proposal being evaluated, required for all procurements regardless of dollar amount. Research shows that if those making the estimate do have skin in the game, they're going to game the system.

The principles of an ICE include:

Developing the estimate without contractor influence

Defining and validating the best value and shared contract risk

Basing the estimate on market research, reference classes, and parametric models from those reference classes

An analysis of reasonable and required resources for performance of the planned work

The projected, anticipated, or probable cost and price of the proposed solution

A benchmark for establishing cost/price analysis of the proposed solution

The Independent Cost Estimate is used to:

Project and reserve funds for the procurement as part of the acquisition planning process

Determine if assumptions in a cost proposal are based on the same or similar assumptions as used by the firm acquiring the solution

Satisfy the governance and oversight requirements of the firm providing the money

Everywhere I work, and everywhere others in our community of Software-Intensive System of Systems work, the ICE teams validate and verify the cost and schedule estimates on behalf of senior management and the firm, following Tim Lister's advice:

Risk Management is how Adults Manage Projects

20. Listening to a #NoEstimates talk where most of the topics are blatant examples of Doing Stupid Things on Purpose.

It seems there is money to be made in conferences, training, coaching, and maybe even consulting confirming that the client is Doing Stupid Things on Purpose, but where the fix is NOT to fix the root cause, just to stop doing that dysfunctional action. This, of course, leaves the root cause of the project's failure in place, while providing NO solution other than a feel-good session where complaining about bad management takes place.

19. Let me keep Features and Stories in the Product Backlog for 6 years.

Never asking the simple questions: do we need these Features for the Capabilities in the Product Roadmap? And when will we need them to meet our plan to deploy them to the market or to our internal user community?

18. Product Roadmap? we don't need no stink'in Product Roadmap. It's a waste, and all that grooming of the Roadmap and Product Backlog is just waste. Let's code the next important thing and have the customer tell us what to do next.

So those paying have no visibility to when the needed capabilities - which are composed of Features - will be ready.

No visibility to the increasing value delivered to the customer and when that Value is planned to arrive, so the customer can plan as well.

No visibility to the Estimate to Complete and the Estimate at Completion.

No visibility to the reducible and irreducible risks, when they will be bought down. How much margin is needed?

17. Let's ignore all the well-known biases for estimating, and just continue on as if they didn't exist. Let's ignore all the well known preventative and corrective actions to address those well-known biases and pretend we have no choice other than be subject to and subjugated by them.

A good example of DSTOP.

16. Let's ignore the fact that all projects operate in the presence of uncertainty (aleatory and epistemic) and assume that simple, non-risk adjusted past performance will be the performance of the future, and we can use that to forecast what will happen in that future, with no adjustment for past variance, or emerging uncertainties or the range of variances in the future.

DSTOP at its best.

15. Let's rename established estimating processes about the past, present, and future to a term that we name "not estimating" - Forecasting.

When we hear willful ignorance of basic high school mathematics, you have to wonder what else they don't know.

14. The claim that we can make decisions with past data without assessing if that data represents the future performance of the project, use that data without estimating the possible variances of the emerging future behaviors, call that forecasting, and in that way avoid having to admit that the approach is estimating, in support of our #NoEstimates moniker.

This goes back to day one, when the originator of the hashtag claimed you can decide in the presence of uncertainty without estimating.

13. Let's spend more on estimating than on the effort to produce the product.

DSTOP at its best.

12. Let's make a change control process that allows no changes, or is so hard that change is made and no one knows about it.

11. Let's use under-sampled, non-statistically adjusted past performance for future performance and ignore that the past may or may not be in the future.

This is the classic #NoEstimates argument - "we use empirical data, we don't estimate"

Of course, that empirical data is a "time series" with random values driven by the underlying uncertainties of the past.

The naive assumption that the future is like the past - statistically as well as behaviorally - is just that: naive.
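If you do use empirical throughput data, the statistically honest version is to treat it as samples from an uncertain process rather than a constant - for example, bootstrapping past weekly throughput into a distribution of completion times. The throughput history and 60-story backlog below are hypothetical.

```python
# Bootstrap forecast sketch: resample past weekly throughput (stories
# finished per week) to get a distribution of weeks-to-finish, instead
# of dividing backlog by average velocity. History and backlog size are
# hypothetical illustrations.
import random

random.seed(11)
weekly_throughput = [4, 7, 3, 6, 5, 8, 2, 6]   # observed history
backlog = 60                                    # stories remaining

def weeks_to_finish():
    done = weeks = 0
    while done < backlog:
        done += random.choice(weekly_throughput)
        weeks += 1
    return weeks

runs = sorted(weeks_to_finish() for _ in range(10_000))
print(f"P50: {runs[5000]} weeks, P85: {runs[8500]} weeks")
```

The spread between the P50 and P85 completion times is exactly the variance that the "we just use empirical data" claim ignores.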

10. When I say NO I really mean YES or Not Really, or Not everywhere.

9. I've given many seminars and asked people what their problems are and take all those into account for my approach. This was a self-selected group, none of whom had financial accountability to the firm they worked for.

8. I wasn't paying attention in the Statistics, Microeconomics, or Business Management class, but listen to me anyway because I've got a lot to say about those topics.

7. I know I get a divide by zero error when calculating ROI, but hey who cares, it's just someone else's money they won't really care.

6. Let's rename standard mathematical terms to fit our oxymoronic concepts of how to avoid telling those paying our salaries how much this will cost in the future.

5. We accepted a cost estimate from our bosses that was lower by 10x to 100x from the actual cost.

4. We started developing software without really understanding what Done looks like.

3. We accepted this project for the price the customer wanted to pay and we'll discover the requirements as we go along.

2. We think we can make decisions about how to spend other people's money without having to estimate how much money, how much time, or the probability that we will successfully deliver what we promised for that money.

1. We've never done this before, and have no one on our team who knows how to do the work. The customers hired us to spend their money without realizing we actually don't know what we're doing, so let's not tell them and let's start spending.

A highly ethical firm here. If you don't know what to do, go find someone who does. It's that simple.


Extraordinary claims require extraordinary evidence. - Carl Sagan

Let's start from the latest and work back to the earliest post that qualifies as Doing Stupid Things on Purpose.

59 - Estimation Proponents Have Not Been Able to Show

Estimation proponents have not been able to show (they have 50+ years of data) how estimation ensures on-time delivery. The Agile community did that in less than 20 years with #ContinuousDelivery and #NoEstimates. The game is up, your estimation emperor has no clothes on!

This is one of those statements that has no basis in principle, let alone practice.

Estimating - and the estimate the process produces - is the raw information needed to manage a project in the presence of uncertainty and to increase its probability of success. Neither the estimate nor the estimating process can assure anything. Only the actions of management and the team increase the probability of success. The quote's author - as he always seems to do - commits a category error: a semantic or ontological error in which things belonging to a particular category are presented as if they belong to a different category, or a property is ascribed to a thing that could not possibly have that property.

In this example, the claim is that the estimate and the estimating process can ensure on-time delivery.

The estimating process cannot ensure anything. In the presence of project uncertainty, there is NEVER assurance of on-time delivery. There is only a probability of on-time delivery.
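That distinction can be made concrete with a short simulation. This is an illustrative sketch - the triangular task durations and the deadline are made-up numbers, not any real project's data - showing that what an estimating process produces is a probability of on-time delivery, never a guarantee:

```python
import random

def prob_on_time(tasks, deadline, trials=20_000, seed=1):
    """Monte Carlo estimate of P(total duration <= deadline).

    tasks: (low, most_likely, high) triangular duration guesses, in days.
    Returns a probability - never a guarantee.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        total = sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        if total <= deadline:
            hits += 1
    return hits / trials

# Hypothetical three-task project against a 40-day deadline.
tasks = [(5, 8, 15), (10, 12, 20), (8, 10, 18)]
p = prob_on_time(tasks, deadline=40)
```

Run against real reference-class data instead of guesses, the same simulation answers the useful question: how much deadline margin buys how much confidence.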

The only way on-time delivery can be ensured is if there are:

No uncertainties - reducible or irreducible

No deadlines

No mandatory features or Capabilities

No constraints on resources or facilities

No mandatory quality for produced outcomes

In the absence of this naive world, there is always risk from uncertainties, constraints from scarce resources, changing requirements, uncertain funding, inconsistent processes, and all the other elements that lower the probability of project success.

The author of this quote is trying to convince us that decisions can be made in the presence of uncertainty without estimating. This not only violates the principles of probabilistic decision making but also the Microeconomics of Software Development and the Managerial Finance of spending other people's money.

It's a claim with no basis in fact, principle, or practice.

It also appears the author didn't read the Scrum Guide, or the Probability and Statistics book he had in high school.

58 - Pressure Your Teams to Get Stuff Out the Door and Expect Them to Give You Accurate Estimates

This is one of many examples of intentional bad management, used by #NoEstimates as the basis for Not Estimating.

When managers - in this case - utterly fail to know how to make credible estimates, the result is usually disappointment. Conjecturing that not estimating fixes bad management is more than nonsense. It's willful ignorance of good management processes.

To meet deadlines, we need two critical success factors as a start:

What are the uncertainties in meeting that deadline?

What are the handling strategies for the risks to the deadline created by those uncertainties?

As you know if you've been reading this blog, uncertainty comes in two forms:

Epistemic uncertainty, which creates reducible risk that can be handled with risk buy-down activities.

Aleatory uncertainty, which creates irreducible risk that can be handled with margin: schedule margin, cost margin, or technical margin.
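For the aleatory case, margin can be sized from the quantiles of the duration distribution. A minimal sketch - the lognormal shape, the 30-day median, and the sigma are illustrative assumptions, not a recommendation:

```python
import math
import random

def schedule_margin(median_days, spread_sigma, confidence=0.80,
                    trials=20_000, seed=2):
    """Size schedule margin for aleatory (irreducible) duration variance.

    Assumes a lognormal duration where median_days is the 50th percentile
    and spread_sigma is the sigma of the underlying normal (made-up inputs).
    Margin = confidence quantile minus the median: the cushion added on top
    of the 50/50 estimate, instead of pretending the variance away.
    """
    rng = random.Random(seed)
    mu = math.log(median_days)
    samples = sorted(rng.lognormvariate(mu, spread_sigma) for _ in range(trials))

    def quantile(p):
        return samples[int(p * (trials - 1))]

    return quantile(confidence) - quantile(0.50)

# A task with a 30-day median and moderate natural variability.
margin = schedule_margin(median_days=30, spread_sigma=0.3)
```

The margin does not remove the variance; it protects the commitment date from it, which is the whole point of a buffer.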

It seems that those making claims about estimating - or not estimating - don't know anything about estimating.

If you want to learn about estimating, start with the Estimating Compendium at the top of this page.

57 - Stupid #NoEstimates Concepts

1. We estimate a task with a specific implementation in mind. Then we decide to do it differently after starting the work. Upon hitting the deadline we retrospectively assign that to the estimate, even if we did something completely different.

2. We estimate a project. Then proceed to work overtime, weekends and "forget" to include those in the "actuals". Later we say "we were only 20% late" and believe it!

3. Fixed scope is an illusion. Even the interpretation of the written text evolves over time (assuming you try not to change the document). Focus your project on outcomes, not scope. On-time / on scope projects can be failures too!

Here's a set of statements from a No Estimates advocate. Let's look at each one.

If we estimate a specific implementation and that changes, we need a new estimate. Why are we waiting until the deadline arrives before updating the estimate? The notions of Estimate to Complete and Estimate at Completion are at the core of any closed-loop control system. No credible management system that spends other people's money in the presence of uncertainty operates in the manner described.

If your estimates show some credible level of confidence - say 80% - for the labor hours, why are you working overtime? The only answer is that your estimate was wrong, you're poor at managing the work, you're not as efficient as your estimate assumed, or several other reducible and irreducible reasons. If the efficiency of the team is too low, then a NEW Estimate to Complete is needed. This is again the fallacy of #NoEstimates, which ignores the core principles of a closed-loop control system:

Where's the feedback loop?

Where are the corrective or preventive actions?

How long are you willing to be late before you find out you're late? The No Estimates advocate behind these quotes claims you should produce working software every single day (and it could be done).

So why isn't a new assessment of the Estimate to Complete done on a similar cycle?

Why wait to find out you're late (or over budget) until it's too late?

Fixed scope is an illusion, but fixed Capabilities are common. They're usually defined in the contract between those paying and those providing. This is called Capabilities Based Planning. In the CBP paradigm, the scope can be, and many times has to be, flexible, since any non-trivial project is usually developing products or services that are not in place now, and new capabilities are needed.

But someone has to document that change and authorize that change; otherwise, when the money and time run out, those paying will be surprised they didn't get what they thought they were paying for:

Outcomes ARE the scope - otherwise, WHY are you producing these outcomes if no one wants them?

Yes, on-time, on-budget projects can be failures. This is more common than not. But the root cause of that does NOT, by definition, come from estimating. Another example of the utter failure to understand the principles of Root Cause Analysis.

Risk Management is How Adults Manage Projects - Tim Lister

Managing in the presence of risk - created by reducible and irreducible uncertainty - requires making estimates, since we're operating in the presence of uncertainty.

For each risk, we need to find the Root Cause, the Condition and/or the Action that creates the risk. Then we can determine how to handle that risk. By buying it down with testing, experiments, redundancy, or other direct actions to reduce the Epistemic Uncertainties that create the risk. Or we can provide margin for the Aleatory Uncertainties that create risk. Both these activities start with making estimates of the attributes of the uncertainty - Probabilistic and Statistical.

Now, is this generally applicable to all estimating conditions? NO. Start with the Value at Risk. Got a de minimis condition? Estimates provide little value. Got a risk to the success of the firm? Better have a robust risk management system, with its estimating, corrective, and preventive action processes, if you're going to survive to live another day.

56 - Past is the Basis of the Future

You never use actuals to replan. Plans are about the future, not the past. There are no actuals for the future. The estimation problem is simply reset when new actuals come in, leading many to "re-estimation." Another can of worms.

This is one of those what-in-Gawd's-Green-Earth-is-the-Original-Poster-thinking-about statements. The answer is: he's not thinking. Reference Class Forecasting is the core basis of all estimating processes, from construction to Agile Software Development. This is one of those statements that can be debunked in under 5 minutes by a middle schooler.

Some 14,300 papers, pages, and books on the topic of Reference Class Forecasting AND Agile Software Development can be found by a middle-schooler on Google in under 3 minutes.

The OP is clearly clueless about the simple and fundamental principles and processes of estimating in the presence of uncertainty.

Go find past projects and see if they are like your project. If so, make adjustments to better match your project. Go look in the NESMA, COSMIC, and IFPUG databases, just as all professionals managing other people's money do. Take those adjustments into consideration and behave appropriately. This is not that hard. This is well known in the domain of Enterprise IT and Software Intensive Systems of Systems. Perhaps those working on de minimis projects - where there is no deadline, no not-to-exceed budget, and no mandatory capabilities for that time and budget - can get by without it. The definition of that work is de minimis.
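Reference Class Forecasting, in its minimal form, is just that adjustment made explicit: collect actual-to-estimate ratios from comparable finished projects and apply them as an uplift to your raw estimate. A sketch with made-up ratios standing in for a real function-point database:

```python
def reference_class_uplift(raw_estimate, past_ratios, confidence=0.8):
    """Reference Class Forecasting in its minimal form.

    past_ratios: actual/estimate ratios from comparable finished projects.
    In practice these come from your own history or a function-point
    database; the numbers below are made up for illustration.
    Returns the raw estimate uplifted to the chosen confidence level.
    """
    ratios = sorted(past_ratios)
    idx = min(round(confidence * (len(ratios) - 1)), len(ratios) - 1)
    return raw_estimate * ratios[idx]

# Ten hypothetical past projects: most overran their estimates.
past_ratios = [0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.1, 2.5]
budget = reference_class_uplift(raw_estimate=100, past_ratios=past_ratios)
```

The past is the basis of the future: the uplift comes entirely from actuals, which is exactly what the quoted tweet claims cannot be done.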

This is one of those blatantly obvious fallacies that can be debunked with ease.

This is one of those fully buzzword-compliant statements by a self-proclaimed agile expert who claims decisions can be made in the presence of uncertainty without estimating the consequences of those decisions on the cost, schedule, and technical performance of the project.

Let's deconstruct the nonsense here, one phrase at a time, and show how each is a fallacy of estimating, likely based on a lack of training or experience in managing business processes while spending other people's money.

Estimation is not an activity - Yes, it is. There's a difference between the estimate as a noun and estimating as a verb.

Noun - an approximate judgment or calculation, as of the value, amount, time, size, or weight of something. ... a statement of the approximate charge for work to be done, submitted by a person or business firm ready to undertake the work.

Verb - to say what you think an amount or value will be, either by guessing or by using available information to calculate it.

Estimating the cost, duration, or technical performance of a piece of software is the verb - an activity performed by people. The estimate, the noun, is the result of the activity of estimating.

(Estimating) It's a management paradigm - Yes, it is, but it is many other things as well.

It's a management process. If the OP wants to call that a paradigm, I guess that's OK.

But estimates, and the production of those numbers through the estimating process, are part of every credible decision-making process in business and engineering in the presence of uncertainty.

If there is no uncertainty, estimates are not needed

#NoEstimates then means #NoUncertainty

But since there is uncertainty on all project work, estimates are needed to make credible decisions when spending other people's money

Near perfect predictability

This is a perfect example of willfully ignoring what an estimate is

There is no such thing as perfect predictability in the real world when there are uncertainties

All estimates have precision and accuracy. Accuracy and precision are alike only in that both refer to the quality of a measurement, but they are very different indicators of that quality.

Accuracy is the degree of closeness to true value.

Precision is the degree to which an instrument or process repeats the same value. In other words, accuracy is the degree of veracity, while precision is the degree of reproducibility.
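The distinction is easy to demonstrate with two hypothetical samples of estimates for the same task - one tightly clustered but biased, one centered on the truth but scattered:

```python
import statistics

def accuracy_and_precision(samples, true_value):
    """Accuracy = closeness of the sample mean to truth (the bias).
    Precision = tightness of the samples around each other (the stdev).
    Returns (bias, stdev): small bias means accurate, small stdev precise.
    """
    bias = statistics.fmean(samples) - true_value
    return bias, statistics.stdev(samples)

# Made-up estimates of a task whose true cost turned out to be 100 hours.
precise_but_inaccurate = [130, 131, 129, 130, 130]  # tight cluster, biased high
accurate_but_imprecise = [80, 120, 95, 110, 95]     # centered on truth, scattered
```

An estimate can be precise and worthless, or imprecise and still perfectly serviceable for a decision - which is why demanding "perfect predictability" from estimates is a category mistake.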

Top-down

Estimates can be top-down, bottom-up, and both

Top-Down estimates are used to estimate the total cost of a project by using information from a previous, similar project. This is also called Reference Class Forecasting or Comparison Class Forecasting: a method of predicting the future by looking at similar past situations and their outcomes. The theory behind reference class forecasting was developed by Daniel Kahneman and Amos Tversky and contributed to Kahneman's Nobel Prize-winning work in economics [1].

Bottom-up estimates are typically made by the people doing the work, and they take part in the estimating process. This is a way to approximate an overall value by approximating values for smaller components and using the sum total of these values as the overall value. One disadvantage of bottom-up estimating is the time it takes to complete. While other forms of estimating can use the high-level requirements used to start the project process as a basis, bottom-up estimating requires low-level components. In order to take into consideration each component of the project work, these components must first be identified, through decomposition.
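One reason decomposition is worth the time: if the components are roughly independent, their means add and their variances add, so the rolled-up total is relatively tighter than its loosest component. A minimal sketch with made-up component estimates:

```python
import math

def bottom_up(components):
    """Roll up (mean, stdev) component estimates into a project total.

    Assumes independent components: means add, variances add.
    components: made-up (mean_days, stdev_days) pairs from decomposition.
    """
    total_mean = sum(mean for mean, _ in components)
    total_stdev = math.sqrt(sum(sd * sd for _, sd in components))
    return total_mean, total_stdev

# Hypothetical decomposed tasks from a work breakdown.
components = [(10, 3), (20, 4), (15, 3), (5, 2)]
total_mean, total_stdev = bottom_up(components)
```

Here the total is 50 days with a relative spread well under that of the worst single task - the statistical payoff for doing the decomposition.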

Both Top-Down and Bottom-Up estimates can be combined to meet the needs of those paying for the work.

Command and Control

C&C is bad management

No credible business applies command and control to developing products

C&C works well in the management of nuclear weapons - a domain I work in

The OP uses C&C as a red herring to rally the troops who have experienced Dilbert's Pointy-Haired Boss, but it has no business being applied in any credible development organization.

And credible estimating processes do not use C&C

Decisions without customer feedback

This is literally doing stupid things on purpose

The OP loves to make these kinds of statements with no consideration of the nonsense he's spewing.

Planning without doers

Why would you do this?

Another example of DDSTOP.

I'll be very crass here: when you hear nonsense like this from a self-proclaimed expert about showing up on time, on budget, with the needed capabilities, without estimating anything, please know he doesn't know WTF he's talking about.

While this notion might be applicable to social or political domains, when you're spending other people's money, that spending needs to be done inside the governance process. The notion that the coders can say how the business's money gets spent is the basis of the No Estimates advocacy. This might be the case when there is nothing at risk - a de minimis project.

But when the Value at Risk is above the de minimis level, the cost to produce value at the needed time for the needed cost is the basis of Managerial Finance of any credible firm that intends to stay in business.

Those No Estimates advocates appear to willfully ignore an immutable principle:

There is NO principle of Managerial Finance, Probabilistic Decision-Making, or the Microeconomics of Software Development by which, in the presence of uncertainty - both reducible and irreducible, which creates risk - a credible decision can be made, using scarce resources, while spending other people's money, without estimating the outcome of that decision and its impact on the probability of success of the project.

Ignoring this principle is the core fallacy of #NoEstimates.

53 - Book Review - No Estimates: How to Measure Project Progress without Estimating

There was a recent review of the No Estimates book. Ignoring for the moment the utter fallacies in the notion of making decisions in the presence of uncertainty without estimating, here are my responses to the questions posted by the reviewer of the book.

How about some clarification of the concepts you mention?

Why are estimates waste? To whom? To those spending - maybe they are a waste. To those paying? You may want to go ask the CFO or the customer if they have a fiduciary need to know how much it will cost them to receive the value they are paying you for. This is called the "Microeconomics" of software development.

They don't add business value. To whom? If you show up late and over budget, the "value" of the work you're producing is reduced. Again, a core principle of Microeconomics. Would you buy a product or service without having some knowledge of the cost of that product or service?

A black swan could potentially wipe you out. This is a common Vasco claim. He read a Macroeconomics book by Taleb. SW development is NOT Macro, it's Micro, and I sense Vasco doesn't know the difference between the two. In Macro, the money is not the same "color" as in Micro - in Micro and SW development you can "stop" spending. As well, the notion of a Black Swan is an Ontological uncertainty - an "unknowable" uncertainty. That is extremely rare in the software development business.

Time after time estimates have been proven wrong.

Go find the root cause of that and take corrective or preventive actions to stop doing stupid things on purpose. Here are some resources you can start with: https://herdingcats.typepad.com/my_weblog/2019/05/software-estimating-resoruces-1.html

Then go hire someone who knows what they're doing and has done this before. Unless you're inventing new physics, someone somewhere has knowledge of how to estimate the work. Having worked myself on "inventing new physics" in graduate school on a particle accelerator, that's what we had post-docs and principal investigators for - to help us along.

What questions do estimates try to answer?

When will we be done?

What will it cost?

Will we make any money on this spend?

What's our break-even date?

Will our investors ever get their money back?

What should the "price" for this product be, so we can recover our cost at the time we need to recover it, and the board of directors won't ask "do you guys know what the hell you're doing?"

This is basic Business Management 101, another topic Vasco willfully ignores.
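Several of those questions reduce to simple arithmetic on an estimated cash-flow profile - which is exactly why the CFO wants the estimate. A toy break-even sketch, with hypothetical cost and revenue figures:

```python
def break_even_month(monthly_cost, monthly_revenue, dev_months, horizon=120):
    """First month where cumulative revenue covers cumulative cost.

    Development burns monthly_cost with no revenue for dev_months; after
    that, the product earns monthly_revenue while still costing
    monthly_cost to run. All figures are hypothetical. Returns None if
    the project never breaks even inside the horizon.
    """
    cash = 0
    for month in range(1, horizon + 1):
        cash -= monthly_cost
        if month > dev_months:
            cash += monthly_revenue
        if cash >= 0:
            return month
    return None

# Spend $50k/month for 6 months of development, then earn $80k/month.
month = break_even_month(monthly_cost=50, monthly_revenue=80, dev_months=6)
```

With no estimate of monthly cost or development duration, none of the break-even, ROI, or payback questions above can even be posed.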

Can you predict when a particular project will end?

Yes, there are many ways to do that. Start by reading these resources: https://herdingcats.typepad.com/my_weblog/2019/05/software-estimating-resoruces-1.html

Then learn about Agile Function Point Counting and the 6 to 8 tools available for estimating agile projects that you can buy, install, and put to work. Some are free, some are expensive. NO credible non-trivial software development org is without an estimating tool. For agile, start with Troy's book and free tools: https://www.amazon.com/gp/product/1466454830/ref=dbs_a_def_rwt_bibl_vppi_i1

Given the rate of progress so far, and the amount of work still left, when will the project end?

This is called the Estimate to Complete and is easily determined with tools built into Jira, Rally, TFS, and VersionOne, and with Troy's free Excel spreadsheet.
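In its simplest agile form, the Estimate to Complete is just remaining work divided by observed velocity - reported as a range, because velocity varies. A sketch with made-up sprint data (the real tools do a more sophisticated version of the same arithmetic):

```python
import math
import statistics

def estimate_to_complete(remaining_points, past_velocities):
    """Forecast remaining sprints from observed velocity, as a range.

    Uses the best and worst observed velocities to bound the answer,
    and the mean for a 'likely' value - a range, not a single date,
    because velocity varies. past_velocities are made-up throughputs.
    """
    best, worst = max(past_velocities), min(past_velocities)
    optimistic = math.ceil(remaining_points / best)
    likely = math.ceil(remaining_points / statistics.fmean(past_velocities))
    pessimistic = math.ceil(remaining_points / worst)
    return optimistic, likely, pessimistic

# 120 story points left; the last five sprints finished 18, 22, 25, 20, 15.
sprints = estimate_to_complete(120, [18, 22, 25, 20, 15])
```

Reassessing this on every sprint boundary is the feedback loop of the closed-loop control system described above.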

Given the rate of progress, how much of the work can be finalized by date X?

Again, the Estimate to Complete and the Estimate at Completion - both standard "engineering" processes that Vasco willfully ignores.

The answer to this question is important for plans, comfort, uncertainty reduction, financial projections, and sales proposals.

What to use instead for story estimation?

Function Points. References to those are in the link on estimates

Forecasts based on past data

This is reference class forecasting. Google "reference+class+forecasting+for+agile+development" to find resources

Prioritization based only on the value

Value cannot be determined without knowing the cost to produce that value. This is high school economics - again, a class Vasco appears not to have taken.

52 - Requirements, as traditionally used, are a separation tool. To separate the "thinkers" from the "doers". In software this is a HUGE mistake!

A system is an integrated combination of any or all of the hardware, software, facilities, personnel, data, and services necessary to perform a designated function with specified results.

Systems engineering is "the recognition and application of scientific, management, engineering, and technical skill used in the performance of system planning, research, and development with an emphasis on the technical management process. This includes the application of the requirements development process, decision analysis methods, technical assessment, configuration management, and interface management."

Systems Engineering

Is a discipline that concentrates on the design and application of the whole system as distinct from its parts. It involves looking at a problem in its entirety, taking into account all the facets and variables, and relating the social to the technical aspect.

Works as an iterative process of top-down synthesis, development, and operation

Is an interdisciplinary approach and means to enable the realization of successful systems

Focuses on defining customer needs and required functionality early in the development cycle

Considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user needs.

Systems Engineering Tasks include:

Stating the problem.

Investigating alternatives.

Modeling the system.

Integration of the system elements.

Launch the system.

Assess performance.

Reevaluate the results.

The Benefits of the Systems Engineering approach:

Ensuring the effective development and delivery of the capability through the implementation of a balanced approach with respect to cost, schedule, performance, and risk by using integrated, disciplined, and consistent activities and processes regardless of the acquisition life cycle

Enabling the development of engineered resilient systems that are trusted, ensured, and easily modified (Agile)

Identifying the most effective and efficient path to deliver a capability, from identifying user needs and concepts through delivery and sustainment

Using event-driven technical reviews and audits to assess program maturity and determine the status of the technical risks associated with cost, schedule, and performance goals

Nowhere in the Systems Engineering paradigm, starting with requirements elicitation, are requirements used to separate the thinkers from the doers. Like the #NoEstimates fallacy from the same poster, this too is a fallacy based on uninformed and unsubstantiated opinion.

51 - Cone of Uncertainty

The misunderstanding of the Cone of Uncertainty has come back, with the claim by a #NoEstimates advocate that the Cone of Uncertainty is Fake News and that there is published data discrediting the myth.

First, the Cone of Uncertainty is a principle used to define the needed reduction in the variances of estimates on programs. It is NOT a post-hoc assessment of project performance; rather, it is a guide stating what level of confidence will be needed at what point in the program to increase the Probability of Program Success. The Cone does NOT need data to validate the principle of reducing uncertainty as the program progresses. That is an Immutable Principle of good project management. If your uncertainty is not reducing at some planned rate, you're not managing the project for success, and you're going to be late, over budget, with products not likely to work - or some combination of those. Asking for data to show the Cone of Uncertainty is valid is a fallacy and shows a lack of understanding of the principle.

When you hear "I have data that shows the cone doesn't reduce," first ask: what's the root cause of the estimating accuracy for your projects NOT improving as the project progresses? The person first making that claim actually had data, which he then proceeded to ignore, and went on to claim the cone doesn't reduce. If he had said the cone doesn't reduce for ME, that would be fine. But #NoEstimates advocates picked up this canard and ran with it, claiming estimates can't be made because the uncertainty around those estimates can't be reduced - NEVER once asking why the uncertainties don't reduce, or reading the IEEE article to see there were causes for why the cone didn't reduce.

Although I don’t have definitive evidence to explain the variation in estimation accuracy I observed, I’ve identified what I believe are the primary causes:

optimistic assumptions about resource availability,

unanticipated requirements changes brought on by new market information,

Here are some suggestions, from a letter to the IEEE Computer article, responding to the original claim that the cone doesn't reduce:

These and other root causes of NOT following the reducing Cone of Uncertainty are common on many projects, from software to construction. But these possible root causes do not remove the principle that you should manage the project and its estimating processes TO reduce the cone of uncertainty, so the probability of success of the project can be increased.
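One way to read the Cone as a management target is as a planned error band that narrows as the program progresses, against which you check whether your current estimate spread is reducing on plan. The 4x initial band and the geometric shape below are illustrative assumptions, not Boehm's published figures:

```python
def planned_cone(progress, initial_factor=4.0):
    """Planned multiplicative error band at a given fraction complete.

    At kickoff (progress=0) estimates may be off by initial_factor in
    either direction; at completion (progress=1) the band closes to 1x.
    Geometric interpolation - an illustrative shape, not Boehm's data.
    Returns (low multiplier, high multiplier).
    """
    factor = initial_factor ** (1.0 - progress)
    return 1.0 / factor, factor

def spread_on_plan(progress, low_estimate, high_estimate, initial_factor=4.0):
    """Is the current estimate's spread inside the planned band width?"""
    low, high = planned_cone(progress, initial_factor)
    return (high_estimate / low_estimate) <= (high / low)

# Halfway through, a 4x initial cone should have narrowed to [0.5x, 2x].
band = planned_cone(0.5)
```

A spread wider than the planned band at that point in the program is the signal to take corrective action - which is the Cone used as a guide, not as a post-hoc measurement.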

With the need to reduce the Cone of Uncertainty to increase the probability of project success, here are the resources you can read - resources that those conjecturing the Cone of Uncertainty can't be reduced apparently failed to read. Then you can see where they went wrong, and put the Cone of Uncertainty to work for you in making informed decisions in the presence of uncertainty. These are papers and electronic books from my library, used on all our software-intensive system of systems projects, that speak to the principles and practices of the Cone of Uncertainty. Anyone claiming the cone doesn't reduce needs to be asked: did you read all of these and understand what the Cone of Uncertainty is about - or are you just making this up?

Time to put an end to the nonsense of No Estimates and the Cone of Uncertainty, and to the continued willful ignorance of the principles, practices, and processes of making decisions in the presence of uncertainty by estimating the impacts of those decisions.

50 - Buffers in SW projects are "not" the same as insurance. Insurance protects you against catastrophic loss with a limited investment (premium). Buffers protect neither buyer nor vendor against catastrophic loss (see the Berlin airport among others)

This is one of those claims fitting Wolfgang Pauli's criterion:

This isn't right; this isn't even wrong.

First, buffers are the margin that protects the project from aleatory uncertainties - the naturally occurring variances in time, cost, and technical performance. This type of uncertainty is the dominant variance on all projects. It creates risk that is irreducible, hence the use of a buffer or margin to protect the deliverable from being late, over budget, or failing to meet the technical requirements.

So, in fact, it is insurance in the sense that it provides protection. But because this protection comes in the form of margin, it does NOT remove, reduce, or correct the source of the risk. That source - the aleatory uncertainty - is irreducible.

But insurance does NOT protect you from Catastrophe either. When the catastrophe occurs what insurance does is compensate you for the loss. It compensates you for the loss of your car when it is totaled, by sending you a check to buy another one. It compensates you with a check to replace your household goods when they are stolen. It compensates you with a check to replace your house when it washes away during a flood.

The second issue is that this No Estimates advocate never seems to understand that until you know the conditions and actions that create the undesirable outcome - the Berlin airport being late - NO suggested fix will be effective. This is a fundamental, immutable principle of Root Cause Analysis. As well, any middle schooler with Google can find the root causes of the Berlin airport delays in 10 minutes. And yes, there were bad, very bad, estimates made of the work efforts. But to suggest that NOT estimating would have brought the airport in on time is utter nonsense.

Here's a summary of the root causes for Berlin Brandenburg being late and over budget

Too many stakeholders with different interests

Failure to communicate real project status

Major changes without replanning the project

Insufficient quality control of the work

Too many fundamentals not in place - too many critical areas not properly managed

The airport was doomed to fail on day one.

Our own Denver International Airport is suffering from the same root causes. What was planned as a security update scheduled for 2021 is now 2023.

So we're back to the same thing: Doing Stupid Things on Purpose.

49 - It is ridiculous to say "estimates (x) are ok because it is people who misuse them f(x)." Wrong! In the "Real World" how things get used & their impact (f(x)) is much more important than the things themselves (x). Things that are often misused should be dropped" #NoEstimates

Yet another example of failing to understand the role of Root Cause Analysis in improving the probability of project success. The notion is that when estimates are misused, the corrective action is to NOT estimate - willfully ignoring the principles of root cause analysis and the principles of decision making in the presence of uncertainty, in exchange for spending your customer's money with no clue of how much will be needed, when you'll be done, or the probability that when you do run out of time and money, there will be anything of value to deliver.

Things that are often misused need to have their use corrected. This is like saying: I misuse my lawnmower and cut the grass too low, which burns the roots, so to correct that undesirable outcome I'll stop mowing my lawn.

How about the person making this claim about dropping processes he doesn't know how to use take a look at how to estimate in the presence of uncertainty and how to conduct a Root Cause Analysis, and then read the resource materials here that will guide him in finding the conditions and actions behind the problem he's having with estimating.

So remember this

When you see or experience dysfunction and you DON'T find the root cause of that dysfunction - the conditions and/or activities that create it - but instead conjecture some action you believe will fix it, you are willfully ignoring the cause of the technical and business dysfunction. This is the basis of #NoEstimates advocacy.

To prevent or correct this problem

When mistakes occur, blame your process, not your people. Apply Root Cause Analysis to find what allowed the dysfunction to occur and what will prevent it in the future. Assume people will continue to make mistakes, and build fault tolerance into your improvements.

Research shows there are five core reasons people ignore Root Cause Analysis:

We just don't have time for root cause analysis - a common excuse where there is a fire-fighting culture.

It's a Blame Culture - Root cause analysis will inevitably uncover problems in your infrastructure that are the direct result of something incorrectly done—or not done—in the first instance. When there is a culture of finger-pointing and blaming others, people may be reluctant to be involved in root cause analysis efforts for fear of being blamed for creating an error.

Lack of Organizational Will - when complex problems are difficult to resolve, management commitment and support are required.

Lack of Skills, Knowledge, and Experience in Root Cause Analysis - performing RCA requires skills and experience, both of which can be acquired with ease.

Lack of Detail and Missing Data - this can be fixed by applying the Apollo Method in the Root Cause Analysis link above to find the conditions and activities that create the dysfunction.

One last thought can be found in the blog post I Smell Dysfunction, motivated by the original post from #NoEstimates advocates and countered by the advice of the Apollo method's author.

48 - In the realm of projects, heuristics are good enough. We don't need more detailed info. We need concrete, actionable info to feed the feedback loop if decision making in e.g. Scope Management. It's people's decisions that lead to on-time delivery, not models

Abstract models reduce the complexity of the real world to digestible chunks that are simpler to understand. This claim fails to understand the role of the model.

While abstract models are just representations, omitting some aspects of the real-world system, they do so temporarily. But they map what we hope to understand into a form that we can understand. Different types of models answer different types of questions about the system they represent, but even if we build many models, they can never answer every possible question about the system. That can only be done by the final system itself.

Both people AND models are needed for success

47 - If you fund projects that are fully specified months ahead (if not years) of their starting date, then you cannot be Agile (no adaptability)

First, funding is not budget. Budget is what you've been authorized to spend; funding is the authorization to spend that budget. The author of this statement appears not to understand the difference. This is a core Managerial Finance principle.

Next is the fallacious notion that funding somehow locks in the needed Capabilities and prevents agility. This is not true. Agility in delivering the needed Capabilities at the needed time for the needed BUDGET is the core basis of Agile. This is one of those Deepity statements that sound important but, when examined in the light of Managerial Finance, the Microeconomics of software development, and probabilistic decision making, are a fallacy.

46 - If you measure projects on how well they fake progress (milestone review via powerpoint), spend money and make time passed look like "actuals" (remember the fake milestones), then you get projects that are perfect at spending money and faking progress. Surprised?

Another good example of DDSTOP. Why would you fake progress (I know it happens) and call yourself an honest person? The author then makes the unsubstantiated claim that NOT estimating will fix this problem. Estimating or not estimating has nothing to do with people behaving badly - or, in our domain, people behaving illegally, outside the business governance process.

45 - Options are the key to agility: more options, more ability to adapt (exercise another option.) Plan-Drive approaches are designed to remove options, therefore control

This is one of those claims that is half right and half wrong, making the claim as a whole fully wrong.

Options are needed when managing in the presence of uncertainty. Without options, the risks created by uncertainty - reducible risk and irreducible risk are going to negatively impact the project.

The notion that plan-driven approaches are designed to remove options is simply uninformed ignorance of how planning works.

The very basis of planning - good planning - is to identify options for when the uncertainties that create risk come true. The phrase "what's Plan B?" comes from this principle.

Managing in the presence of uncertainty mandates that we have alternatives to the plan, since those uncertainties have specific probabilities of coming true and disrupting the current Plan.

The author of this quote is anti-estimates, anti-plans, anti-management, and appears to be anti-doing his homework on how to manage in the presence of uncertainty.

Let's start with some resources for how to manage in the presence of uncertainty, when those paying us need us to produce needed capabilities at a needed time and a needed cost. Here's a quick overview of how to manage in the presence of uncertainty.

So let's address the issue in the quote that Plans remove Options. First, a correction to the quote.

The freedom to choose is the underlying principle of agile development. The term Option in the software development world is Real Option. Real Options is about deferring decisions to the last responsible moment, which is an explicit principle of agile development. By avoiding early commitments, flexibility is gained in the choices to be made later.

Real Options is an approach that allows people to make optimal decisions within their current context. This may sound difficult, but in essence, it is a different view on how we deal with making decisions. There are two aspects to Real Options, mathematics, and psychology. The mathematics of Real Options, which is based on Financial Option Theory, provides us an optimal decision process. The psychology of uncertainty and decision making tells us that we don't always follow the optimal processes and make irrational decisions at times.

So how does planning interact with Real Options? And is the claim that Plan-Driven approaches are designed to remove options a fallacy - or worse, simply bogus?

Some choices fall under the title of real options:

Depending on the real option value (ROV) - that is, the value of the option - there are choices to expand the project in some new direction, contract the project from one direction, or both expand and contract. This is the basis of flexibility in the execution of the project. The very heart of Agile is the ability to change direction in the presence of emerging conditions, that is, to:

Initiate a change

Delay a change

Abandon a direction

Plan a new direction

This notion of planning a new direction assumes we have a plan now - a plan that will be changed as the result of new information that allows us to exercise our real option.

Planning is about exercising options

Software development is an investment activity. Managing this investment activity in the presence of uncertainty means making trade-offs among the options that appear from uncertainty. Uncertainty puts a premium on flexibility to change products and plans, but flexibility also incurs costs. Developers can change products and plans as new information comes to light [1]. This is the basis of agile.
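The value of deferring a commitment can be shown with a toy two-scenario sketch. The payoffs and probabilities below are illustrative assumptions, not a real case; the point is only that waiting until uncertainty resolves is worth something, and that something is the real option value:

```python
# Two possible market outcomes for a product direction, equally likely.
# Payoffs are illustrative: feature A pays 100 in an "up" market but
# only 10 in a "down" market; feature B pays 40 either way.
p_up = 0.5
payoff = {"A": {"up": 100, "down": 10}, "B": {"up": 40, "down": 40}}

# Commit now: you must pick the single feature with the best expected value.
ev_commit = max(
    p_up * pays["up"] + (1 - p_up) * pays["down"]
    for pays in payoff.values()
)

# Defer (the real option): wait until the market reveals itself, then
# pick the best feature in each scenario.
ev_defer = (p_up * max(p["up"] for p in payoff.values())
            + (1 - p_up) * max(p["down"] for p in payoff.values()))

option_value = ev_defer - ev_commit
print(f"Commit-now EV = {ev_commit:.0f}, defer EV = {ev_defer:.0f}, "
      f"option value = {option_value:.0f}")
```

Note that computing the option value requires estimates of the probabilities and payoffs - Real Options do not remove the need to estimate, they depend on it.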

The result of applying Real Options to software development is simple

We don't have A Plan, we have Plans (plural) for getting to where we need to go.

The original poster has failed (yet again) to read the literature and to understand that the words he's using are incorrect. Google will find everything you need with the simple phrase "real options" and "planning" and "software development." Don't listen to anyone who hasn't done his homework on a subject. Those in the #NoEstimates community are notorious for not doing their homework, and this is a prime example.

Finally ...

Real Options is about using estimates to make decisions in the presence of uncertainty, by assessing the options and the impacts of the possible choices to deliver the best value for the investment.

So the OP's objection to estimating is also a fallacy when faced with the uncertainties of making decisions based on options.

"Real Options 'in' Projects and Systems Design - Identification of Options and Solution for Path Dependency," Tao Wang, doctoral thesis, Engineering Systems Division, MIT, May 17, 2005.

44 - If #Agile has taught us something it is that uncertainty can never be reduced "intellectually" (e.g. Through estimation BDUF or Plan-Driven approaches), only through "ACTION" and empirical observation

This is one of the Deepity statements where the words sound important but are nonsense.

First, what does it mean to intellectually reduce uncertainty?

Uncertainty is a tangible measure of the probability that what you expect isn't going to happen

There is no intellectual aspect to uncertainty

There IS a mathematical aspect

Uncertainty comes in two forms

Epistemic - is uncertainty that comes from the lack of knowledge. This lack of knowledge comes from many sources. Inadequate understanding of the underlying processes, incomplete knowledge of the phenomena, or imprecise evaluation of the related characteristics are common sources of epistemic uncertainty. In other words, we don't know how this thing works so there is uncertainty about its operation.

Aleatory - is uncertainty that comes from a random process. Flipping a coin and predicting either HEADS or TAILS is aleatory uncertainty. In other words, the uncertainty we are observing is random, it is part of the natural processes of what we are observing.
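The distinction can be made concrete with a minimal Python sketch (all numbers are illustrative assumptions, not data): the coin's randomness never goes away no matter how much we sample it, while our ignorance about a fixed-but-unknown quantity shrinks as we buy knowledge.

```python
import random

random.seed(0)

# Aleatory: a fair coin. More flips sharpen the estimate of the MEAN,
# but each individual flip stays 50/50 - the randomness is irreducible.
flips = sum(random.random() < 0.5 for _ in range(100_000))
print(f"Estimated P(heads) = {flips / 100_000:.3f}; the next flip is still 50/50")

# Epistemic: an unknown but fixed quantity - here a module's true defect
# rate (an illustrative 0.12). Sampling reduces our ignorance about it,
# so this uncertainty is reducible by buying knowledge.
true_rate = 0.12
estimates = {}
for n in (10, 1_000, 100_000):
    hits = sum(random.random() < true_rate for _ in range(n))
    estimates[n] = hits / n
    print(f"n = {n:>7}: estimated defect rate = {estimates[n]:.3f}")
```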

All Observations are Empirical

Empirical evidence is information acquired by observation or experimentation. The process is a central part of the scientific method.

Release Plans and Product Roadmaps in Agile are Plans

Planning is part of all agile processes

The Backlog - Feature Backlogs or Story Backlogs are plans for what is going to be done in the next Sprint or Release

BDUF is a strawman for Bad Management

The author of the OP and other #NoEstimates like to use Big Design Up Front as the stalking horse for agile.

This process is forbidden in our complex software-intensive system of systems domain.

If he sees BDUF, those developers are Doing Stupid Things on Purpose

Reference Posts on the topic of Uncertainty, Risk, and Estimating in the Presence of Uncertainty

The original poster appears not to have done his homework in High School probability and statistics class and doesn't understand the differences between Aleatory and Epistemic. Don't do the same - learn about estimating and the resulting risks in the following resources.

43 - Understand one thing before jumping into #NoEstimates debate: product and software development for *real* use is a DISCOVERY problem, not a delivery problem. Estimation focus on delivery, not DISCOVERY

All product and software development operates in the presence of uncertainties. The first type is Aleatory uncertainty, which results from the naturally occurring variances in the work.

Driving from my home to the airport to commute to the job site has aleatory uncertainty. Google says it's 47.1 miles and will take 51 minutes. Rarely, if ever, has it taken exactly 51 minutes, and no one would plan the airport trip on that number - your chances of being late are high. I always allow 2 hours, since I have a special parking spot, which I have a very high probability of finding, from which I can walk straight to the check-in counter, drop my bag, get in the Clear line, board the train, and go to the gate.

The travel time to the airport has both naturally occurring variability and probabilistic event uncertainty. The speed limit is 75 MPH, but sometimes the traffic is slower - that's naturally occurring variability. There is also a probability of road construction, a traffic stop that slows down all the other traffic, the start of a snowstorm, or other probabilistic events.

Duration and productivity in software development have Aleatory Uncertainty. If you've concluded it will take Exactly 18 hours to develop a Story, you'll be disappointed when you encounter some delay - or even find some way to develop the Story faster.

Aleatory uncertainties create irreducible risks. These risks can only be handled with margin. Schedule margin, cost margin, and technical margin.

The second type of uncertainty of product and software development is Epistemic Uncertainty. This uncertainty creates a risk that has a probability of occurrence.
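A minimal Monte Carlo sketch of the airport-trip example shows how the two kinds of uncertainty combine to push the realistic planning number well past Google's 51-minute point estimate. The spreads and probabilities are illustrative assumptions, not measured data:

```python
import random

random.seed(7)

# Google's point estimate is 51 minutes. Model the aleatory (naturally
# occurring) variability as a triangular spread around that value, and
# road construction as an epistemic, probabilistic event.
def trip_minutes():
    t = random.triangular(45, 80, 51)      # aleatory spread: min 45, max 80, mode 51
    if random.random() < 0.10:             # assumed 10% chance of construction
        t += random.uniform(15, 45)        # 15-45 extra minutes if it occurs
    return t

runs = sorted(trip_minutes() for _ in range(20_000))
p50, p95 = runs[10_000], runs[19_000]
print(f"Point estimate 51 min; P50 = {p50:.0f} min; P95 = {p95:.0f} min")
```

Planning to the 95th percentile rather than the point estimate is exactly the role of schedule margin.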

So let's deconstruct the statement in light of these uncertainties that create risk

If you're developing products and software that are NOT for real use, either you're a student doing your homework, or you're wasting your customer's money

This is one of the deepity statements many in the agile community like to make

In the presence of uncertainty, all development is a discovery problem

This means the Capabilities needed by the customer or product manager are defined in terms that may change as the software is developed

The notion that requirements are fixed and immutable is nonsense; even in formal acquisition processes, this is the role of Change Control.

If there are no uncertainties (epistemic or aleatory), then either the project operates in a realm unheard of in the real world, or the project is de minimis - the variances that impact the project are so low they have no effect on the probability of success

So,

To deliver we need to know something about the uncertainties on the project - reducible (Epistemic) and irreducible (Aleatory).

To deliver when those paying for your work need to put it to use, they need some sense of when it will be available.

Those paying for your work also need to know something about the cost of your effort, since Value cannot be determined without knowing the cost to produce that Value AND when that Value will arrive.

To deliver the needed Capabilities (and their Features and Stories), for a needed Cost (to meet the Value equation), at the needed time (to meet the start of revenue generation and cover the time cost of the money used to develop the product) - when there are uncertainties (reducible and irreducible) that create risk, you're going to have to estimate those uncertainties and their impacts on the probability of success for the cost, schedule, and technical performance of your product.

This is an immutable principle of Microeconomics of Software development, Managerial Finance, and Probabilistic Decision Theory.

The #NoEstimates advocates appear to willfully ignore these principles and the practices of making risk-informed decisions in the presence of the uncertainties encountered on ALL software development projects. Ignore their advice, since they failed to understand ...

Risk Management is How Adults Manage Projects - Tim Lister

42 - Estimation is a dysfunctional illusion. You can't know how long things take. *BUDGET* investments & use #ContinuousDelivery to be collecting feedback as you go. It is IRRESPONSIBLE to make an investment based only on an estimate (60% delays are common)

Let's deconstruct this conjecture:

Estimation is a dysfunctional illusion - for someone like the OP it may be, since he appears to be wholly uninformed about how software estimates are made, used, and validated. Here's a starting point: Agile Estimating. If you haven't read how to estimate, practiced the advice in these papers, or looked into the tools for estimating software, then you have NO basis on which to claim estimation is a dysfunctional illusion. It's just unsubstantiated opinion.

You can't tell how long it will take - another unsubstantiated opinion. If you don't know what Done looks like in some meaningful units of measure, you probably can't tell how long it takes. This means:

You have no product road map

You have no release plan

You have no experience developing software in this domain

You pretty much know nothing about the problem or the solution

Budget investments and use continuous delivery - delivery of what? Any road map?

Any uncertainties in that delivery process?

Any variabilities in the productivity of the development team?

Any changes in the needed capabilities?

Any reducible risks?

Any irreducible risks?

If NO, then just code anyway and deliver code

If YES, you'd better be estimating

It is irresponsible to make an investment based only on an estimate - yes, and NO ONE does that. At least no credible business manager does. This is the classic tautology from the OP, posted as if it were new information. This is called a deepity - a proposition that seems profound because it is logically ill-formed. It has (at least) two readings and balances precariously between them. On one reading it is true but trivial. On the other it is false, but would be earth-shattering if true.

60% delay is common - WHY? No Root Cause is ever provided by the OP for these claims. Without the root cause (setting aside that the number is bogus), any suggested fix is bogus.

So, in the end, it comes down to this

There is no principle by which you can make a decision in the presence of uncertainty (reducible - Epistemic, or irreducible - Aleatory) for a non-trivial spend of other people's money without making estimates of the impacts of that decision. Any claim that you can willfully ignores the principles of managerial finance, the microeconomics of software development decision making, and probabilistic management control of the business.

When you hear you can, that person is willfully ignoring these principles and is a prime example of Doing Stupid Things on Purpose.

Some in the agile community who have drunk the #NoEstimates Koolaid firmly believe that estimates are waste and that decisions can be made in the presence of uncertainty without estimates - focusing on producing Value with no consideration of the cost to produce that Value, when that Value will arrive, the Measures of Effectiveness and Measures of Performance for that Value, or any of the measures used in a closed-loop control system to manage the work effort in the presence of uncertainty.

Just start coding and the customer will tell them when to stop. Here's an EDS video from the same advertising firm that inspired me to name my blog Herding Cats.

A #NoEstimates advocate claims that a ±10% accuracy for estimates of cost and duration is a dangerous thing. With what appears to be NO understanding of how to estimate, this author ignores the processes used in developing products or services in the presence of uncertainty.

In our software-intensive system of systems domain, we develop proposals with 80% confidence of completing on time and on budget at the time of submission. The ±10% value has no context (as usual), but that range is certainly possible using the processes of probabilistic modeling of the project.

Here's how:

Build a model of the needed Capabilities

Define Reference Classes for those Capabilities and the Features that implement them. We develop these reference classes using Agile Function Points.

There are databases for Function Points

Use these to develop a Systems Model of the products

Define the probabilistic ranges of the work, starting from a single-point estimate

This means defining the Most Likely value for the range of duration or cost for the item

Define the upper and lower bounds of this Most Likely value

Define the Probability Distribution Function for this range. Use the Triangle Distribution when there is no past performance data

Define the dependencies between the Capabilities and the Features in some form

If not these approaches, then use some process of making the connections between the Capabilities, the Features, and the outcomes visible. The common Agile development tools (Rally, JIRA, Team Foundation Server) have embedded tools for making these charts.

Define the risks - reducible and irreducible - to each Capability and their Features

For each risk define the probability of occurrence, the probability of impact, the probabilities of duration or cost impacts from that impact, the probability of success for the corrective or preventive actions, and the probability of any residual risk

Place all the information into some modeling tool
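The steps above can be sketched as a small Monte Carlo simulation. Every number here is an illustrative assumption standing in for your reference-class data:

```python
import random

random.seed(42)

# Features with three-point estimates (low, most likely, high) in
# developer-days, derived in practice from reference classes.
features = {
    "login":   (8, 12, 20),
    "search":  (15, 25, 45),
    "reports": (10, 14, 30),
}

# Risks, each with a probability of occurrence and a range of schedule
# impact in days if it occurs (both assumed for illustration).
risks = [
    {"p": 0.30, "impact": (5, 15)},   # e.g., third-party API change
    {"p": 0.10, "impact": (10, 30)},  # e.g., key dependency slips
]

def one_run():
    # random.triangular takes (low, high, mode).
    total = sum(random.triangular(lo, hi, ml) for lo, ml, hi in features.values())
    for r in risks:
        if random.random() < r["p"]:
            total += random.uniform(*r["impact"])
    return total

runs = sorted(one_run() for _ in range(20_000))
p50, p80 = runs[10_000], runs[16_000]
print(f"P50 = {p50:.0f} days, P80 = {p80:.0f} days, "
      f"schedule margin for 80% confidence = {p80 - p50:.0f} days")
```

Commercial tools do this with far richer models (correlations, calendars, resource constraints), but the principle is the same.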

If you don't have one, ask this critically important question

What is the Value at Risk for your Project?

This is the core question for any discussion of the need for, the value of, or outcome of estimating. If your Value at Risk is low, then all this is not likely to be of concern.

But without the answer to that Value at Risk question, any suggestion to do anything is baseless, since there is no consideration of the impact of the suggestion to NOT do something.
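To make the Value at Risk question concrete, here's a minimal sketch computing a simple VaR from a simulated cost distribution. The distribution parameters and budget are assumed, not real data:

```python
import random

rng = random.Random(11)

# Simulated project cost outcomes in dollars - a triangular distribution
# with assumed low / most-likely / high values standing in for a real model.
costs = sorted(rng.triangular(400_000, 900_000, 550_000) for _ in range(20_000))

budget = 600_000                        # assumed committed budget
expected = sum(costs) / len(costs)
p95 = costs[int(0.95 * len(costs))]
value_at_risk = p95 - budget            # 95th-percentile shortfall vs budget
print(f"Expected cost = ${expected:,.0f}; 95% worst case = ${p95:,.0f}; "
      f"Value at Risk = ${value_at_risk:,.0f}")
```

If this number is small relative to what the firm can absorb, heavyweight estimating may not pay for itself; if it is large, it is negligent to skip it.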

So how do we get ±10% accuracy?

Apply margin for the irreducible uncertainties that create the risk.

Perform risk-reduction work, using budget, for the reducible uncertainties that create the risk.

Both of these actions cost money. You can spend money to buy down risk, and that buy-down reduces the variances in the cost and schedule and increases the probability of project success for that cost and schedule.
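A small sketch of that buy-down effect, with assumed numbers: reducing a risk's probability of occurrence (say, by funding a prototype) visibly pulls in the 80th-percentile completion date.

```python
import random

def p80_days(risk_probability, trials=20_000, seed=3):
    # Baseline work: triangular 90-130 days, most likely 100 (assumed).
    # One reducible risk adds 20-40 days if it occurs.
    rng = random.Random(seed)
    runs = sorted(
        rng.triangular(90, 130, 100)
        + (rng.uniform(20, 40) if rng.random() < risk_probability else 0)
        for _ in range(trials)
    )
    return runs[int(0.80 * trials)]

before = p80_days(0.40)   # risk handled with hope
after = p80_days(0.05)    # risk bought down, e.g., via a funded prototype
print(f"P80 before buy-down = {before:.0f} days, after = {after:.0f} days")
```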

This is a closed-loop system optimization process applied to all the projects we work on.

With this process, you can get a ±10% range for any estimate. Is this normal? That's a different question. 80% confidence of on or before and at or below is our norm. But the firm where the ±10% range was needed may well have a need to control the Value at Risk for the project.

39 - You Don't Need to Know What Done Looks Like, Just Have a Small Plan to the Next Point

There's an ongoing notion in the agile domain that we don't need, or even want, a Plan that shows us what Done looks like - that the agile team is exploring new territory and plans (maps, in the analogy) are of little use.

It is about the inadequacy of maps when you are navigating *new* territory. There's only so much we can predict up front.

A nice platitude, but platitudes don't put money in the bank, and money in the bank is what software development is about. If you're navigating new territory without some kind of map, you're lost and unwilling to admit it.

It's straightforward to construct a map of the territory:

What capabilities do the customers need in order to spend money on the product or service?

If you don't know this at some high level, do not spend a dime.

If you can't state one or two needed capabilities a customer would be willing to pay for, you're unqualified to be spending the firm's money.

If you don't know anything about what customers might pay money for, then you're in the wrong business.

With this short list of capabilities, what problems would they solve for those willing to pay for them?

In your experience, what might be the effort and time needed to produce one of these capabilities?

If you don't know the answer to that, those paying you should go find someone who does.

The notion of the inadequacy of maps is really a statement about the inadequacy of YOU to build such a simple, first cut, top-level map. This is likely the situation the Original Poster is in.

One analogy for this condition is the Watchmaker and the Gardener. [1], [2]

When a system is bounded with relatively static, well-understood requirements, classical methods of development are applicable. At the other end of the spectrum, when systems are not well defined and each is individually reacting to technology and mission changes, the environment for any given system becomes essentially unpredictable.

The metaphor of the watchmaker and the gardener is useful to describe the differences between development in the two types of environments. Traditional software development is like watchmaking: its processes, techniques, and tools are applicable to difficult problems that are essentially deterministic or reductionist.

Like gardening, development of software products draws on the fundamental principles of evolution, ecology, and adaptation. It uses techniques to increase the likelihood of desirable or favorable outcomes in environments characterized by uncertainty and that may change in unpredictable ways. This approach to development is not a replacement for classical development. It is a method to get started. Both disciplines must be used in combination to achieve success.

Now to the Next Step

If you're spending your customer's money, and you AND your customer don't have some shared sense of what direction you're going - some goal - the pace you are making toward that goal (velocity, by the way, is not a single number; velocity is a vector - it has direction and speed), and how long it's going to take, to some agreed-upon accuracy and precision, to get to a stopping point, then that customer is spending her hard-earned money with no idea of what Done looks like.

This is called a Death March project. And those suggesting that Plans are not needed beyond the next Sprint are on a Death March by their own choice. Another DDSTOP.

This might be the case in some science experiments. But even that is nonsense: our son is a scientist with funding from outside sources, and when they provide him with money they have an expectation of a deliverable at the end of the period of performance.

This is an example of someone making a claim with little or no understanding of how business works, of how those paying for value manage their money in the presence of uncertainty. Another example of DDSTOP - in this case, doing things with NO understanding of Managerial Finance, the Microeconomics of Software Development, or Probabilistic Decision Making. If the OP had simply Googled the topic, he too would have found the materials we use in our domain to manage software development projects in the presence of emerging uncertainty, starting with little certainty as to the needed capabilities.

References

Whenever you hear a conjecture, the first thing to do is go to Google and start exploring: to see if the conjecture is credible, if there are already answers to the supposedly unanswered questions, and to learn for yourself whether the person making the conjecture has any credibility. Here's a quick - under 5 minutes - sample of how to make plans in the presence of uncertainty.

[3] "A Framework for Understanding Uncertainty and its Mitigation and Exploitation in Complex Systems," Dr. Hugh McManus and Prof. Daniel Hastings, Fifteenth Annual International Symposium of the International Council on Systems Engineering, 10-15 July 2005

38 - Failure to Understand Planning in Presence of Uncertainty

The moment you stop believing you can predict the future, waterfall planning approaches make no sense

When managing software development in the presence of uncertainty, precision and accuracy are variables that must be defined before any estimates are made. This is independent of the software development processes - be it agile or traditional.

Waterfall is a term no longer used in the domain I work in. It's also a code word for bad project management. Iterative and incremental development are standard practice, from software development to large construction (Lean Construction). The use of Waterfall is a dog whistle for those willfully ignoring the principles of managing anything in the presence of uncertainty. In this case, people have been convinced that estimates are not needed by those paying for their work.

37 - Judgment from Experience Requires Repeatability

Judgment from experience requires repeatability (experts work best in Complicated or Ordered Cynefin domains). The moment you have little-to-no repeatability, experts are at best useless; adaptability is a better survival strategy.

If the work is non-repeatable, the expert's experience is absolutely necessary for a simple reason.

If you're not an expert, you're not going to recognize the possible solutions, risks, impediments, and opportunities for the problems you'll encounter in developing a solution that has never been developed before. This is why we hire experienced experts: they keep us out of trouble. To be very crass, if you're not an expert, you're very likely not to know WTF you're doing. Seems the author of the quote fits that description.

This knowledge starts with Reference Class Forecasting which is a method of predicting the future (cost, schedule, technical performance) by looking at similar past situations and their outcomes. Reference class forecasting predicts the outcome of an action based on actual outcomes in a reference class of similar actions to that being forecast.

Where do you find these Reference Classes?

For cost and schedule, there are databases containing 1,000's of past projects.

For technical design, there are existing designs, patterns, packages, architecture references, and similar resources. Yes, you have to pay for them, but that cost is cheap compared to a naive and novice approach based on experimenting with other people's money.
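A minimal sketch of reference class forecasting: take the distribution of actual-to-estimate ratios from similar past projects and read off the uplift needed for a given confidence level. The ratios and the inside-view estimate below are illustrative placeholders for a real database:

```python
# Reference class: actual/estimate cost ratios from similar past
# projects (illustrative; real databases hold thousands of entries).
ratios = sorted([0.9, 1.0, 1.05, 1.1, 1.15, 1.2, 1.3, 1.45, 1.6, 2.0])

def uplift(confidence):
    # The ratio at the desired percentile of the reference class is the
    # multiplier applied to the team's own "inside view" estimate.
    idx = min(int(confidence * len(ratios)), len(ratios) - 1)
    return ratios[idx]

inside_view = 500_000  # the team's own estimate, in dollars (assumed)
for conf in (0.5, 0.8):
    print(f"{conf:.0%} confidence budget = ${inside_view * uplift(conf):,.0f}")
```

This "outside view" correction is exactly what a naive bottom-up estimate misses.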

In all engineering worlds, from software engineering to bending metal for money, there is really nothing new under the sun. If it is truly new and never before seen, then it's called a science experiment. Rarely are software engineers working on science experiments. And even in a science experiment (both our children work in the science world), the very first thing you do is a literature search to see what other people have done in your field around the question you are trying to answer.

In our domain of aerospace and defense, we have reference classes for cost, schedule, architecture (DODAF), risk, and other attributes of every system ever built for the DOD (CAPE, CADE) and NASA (CARDe/Once).

Your domain may have NO reference classes. This is why you should hire someone who's done this before and ignore those who state the fallacy that...

Judgment from experience requires repeatability. The moment you have little-to-no repeatability experts are at best useless, adaptability is a better survival strategy.

Adaptability is of little use if you don't know the boundaries of the technology, processes, and uncertainties of the problem. If you don't know, it's a wonderful way to spend the money of those paying you to learn (the hard way). So check with them first if they're willing to fund your education to learn how to solve problems in a domain you have no expertise in when you could have simply hired someone who does.

As another observation, Cynefin describes four quadrants of systems. But it tells us NOTHING about how to stay out of a quadrant, or what actions are needed to move from one quadrant to another. As a Systems Engineer, I find it an interesting notion, but it has NO actionable outcomes beyond the making of an observation.

It's like standing on the dock watching the ships pass by, when what is needed is to be on the bridge, at the helm, making sure the ship leaves the harbor safely. Take a look at "Complexity Primer for Systems Engineers" as a start, and then some more resources for managing in the presence of complexity.

36. The moment you have little-to-no repeatability experts are at best useless

Yet another example of the #NoEstimates community's lack of understanding of estimating, reference class forecasting, reference class databases, parametric estimating, estimating tools, and other well-established estimating processes in the agile software development world.

Judgment from experience requires repeatability (experts work best in Complicated or Ordered Cynefin domains). The moment you have little-to-no repeatability, experts are at best useless; adaptability is a better survival strategy.

35. It is in doing the work that we discover what work we must do

This is the classic #NoEstimates vision of how software is developed. But what it really says is:

I don't have a clue what Done looks like, so I'll just start spending my customer's money and she'll tell me what to do, when to do it, and when to stop doing it.

Or the companion quote from an Agile thought leader

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. If we had done it before, we’d just give it to you.

That way, you can start ignoring the advice of those Thought Leaders when they say "I don't really know, but here's how I'd start," and replace that with "I have some notion of what this software needs to do; let me go look at some reference database, build my ConOps model, get a first-order estimate, and then we can refine that with more input."

So here's the way out of this dilemma of not knowing what to do other than to start coding. The way out is called Software Engineering. Let's start with the obvious...

Hire someone who does

Unless you're inventing new physics, there is someone, somewhere, who has some clue about the effort, duration, and cost to develop what you want.

I've worked on inventing new physics projects and in our proposal to the USAF Office of Scientific Research, they asked for an estimate of the needed funds, and what might we find at the end of those funds in exchange for their investment.

The answer to the question is to build a parametric model of the system's highest-level programmatic and technical architecture and ask simple, straightforward questions to produce rough order of magnitude estimates of cost, schedule, and technical performance.

So even the inventing new physics excuse is lame.
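As a sketch of the parametric approach, here's a COCOMO-style power-law model: effort grows super-linearly with estimated size, and calendar duration is derived from effort. The coefficients below are illustrative assumptions, not calibrated values - a real estimate would calibrate them against a reference class.

```python
# Hypothetical COCOMO-style parametric model. Coefficients a, b, c, d are
# illustrative placeholders, not calibrated values from any real model.
def rom_effort_person_months(ksloc: float, a: float = 2.9, b: float = 1.10) -> float:
    """Rough-order-of-magnitude effort for a system of `ksloc` thousand lines."""
    return a * ksloc ** b

def rom_duration_months(effort_pm: float, c: float = 3.7, d: float = 0.28) -> float:
    """ROM calendar duration derived from effort (illustrative coefficients)."""
    return c * effort_pm ** d

effort = rom_effort_person_months(100)   # a hypothetical 100 KSLOC system
duration = rom_duration_months(effort)
print(f"ROM effort:   {effort:.0f} person-months")
print(f"ROM duration: {duration:.1f} months")
print(f"ROM staffing: {effort / duration:.0f} people average")
```

Even a model this crude produces a first-order answer that can be refined - which is precisely what the USAF proposal process asked for.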

Subscribe to one of several reference class estimating sites

NESMA - an independent international organization focused on software metrics and software measurement.

COSMIC - a voluntary, worldwide grouping of software metrics experts. Started in 1998, COSMIC has developed the most advanced method of measuring the functional size of software. Such sizes are important as measures of software project work-output and for estimating project effort.

Develop a Concept of Operations and use Function Point Analysis to develop an estimate

From that ConOps, decompose the elements using Function Point Analysis, place those elements in an IFPUG tool, and see what it produces

Use one or all of the reference class sites above to calibrate your FP model
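A minimal sketch of the ConOps-to-estimate path: count the functional elements, weight them, and convert to effort with a productivity rate. The counts and the hours-per-FP rate here are illustrative assumptions; a real count would follow the IFPUG rules and calibrate against a reference class such as NESMA or COSMIC.

```python
# Commonly published IFPUG average weights for an unadjusted FP count.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Hypothetical counts decomposed from the ConOps.
counts = {
    "external_inputs": 24,
    "external_outputs": 18,
    "external_inquiries": 12,
    "internal_logical_files": 9,
    "external_interface_files": 4,
}

ufp = sum(AVG_WEIGHTS[k] * counts[k] for k in counts)
hours_per_fp = 7.0  # assumed productivity, calibrated from past performance
print(f"Unadjusted FP: {ufp}")
print(f"First-order effort: {ufp * hours_per_fp:.0f} hours")
```

The point is not the particular numbers - it's that a defensible first-order estimate exists well before the first line of code.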

34. At the #NoProjects #NoEstimates workshop it was mentioned that focus on "on-time, on-budget" naturally leads the project to meet those goals and forget the value/benefits side of the equation

We have a quote we use when we're called onto a program that is Doing Stupid Things on Purpose; it goes like this...

What's the difference between this Program and the Boy Scouts? The Boy Scouts have adult supervision

This is yet another example of Doing Stupid Things on Purpose. If we take Tim Lister's advice about managing risk while spending other people's money in the presence of uncertainty, where is the adult supervision here?

33. The First Step in Any Project is to grossly Underestimate its Complexity and Difficulty and its companion from the same author Finishing a project "on budget and on time" is not an indication of success. It is merely the result of gaming an easily gamed system.

These are classic examples from an author who is either unskilled, untrained, and inexperienced in estimating software development. Or who willfully ignores the knowledge and resources readily available in textbooks, papers, tools, and training for how to create credible estimates for software systems. I suspect the latter.

What these quotes actually say is I have no intention of learning how to estimate cost, schedule, and technical performance because I don't want to. My customer doesn't care what I do. And, my customer is equally as clueless about the need to estimate as I am.

I know it's your money, but I'm going to ignore that and behave as if it's my money. I'll do any damned thing with it I want; you can't tell me how to spend it.

This pretty much sums up the basis for the #NoEstimates argument.

32. There are always uncertainties, that's why estimation does not work. We have prioritized work, not deadlines. We measure the outcome of value, not time.

This is one of those patently false conjectures. It is exactly because of uncertainties that estimates are needed.

If you're working on a software project with NO deadlines, then you're a very lucky person. What that means is those paying you have no need to recoup their investment on any timeline. Just keep spending until told to stop spending. Prioritization then becomes the order in which those paying you want the work done. They don't care about cost, risk, schedule, or the probability that they'll get what they paid you for. You just labor, and someone else outside your domain will be making the decisions.

But an open question remains. How do you prioritize the work in the presence of uncertainty about its Value, Cost, Effort, and Duration to produce that Value, without making an estimate?

31. Read the NoEstimatesBook and come to the workshop to see how I boldly claim that it can save your company millions.

Save Millions of Dollars? Does that make sense to anyone who has worked in software development? Probably not.

Exactly how these savings are achieved is not actually stated in the book. This is the claim from the book

Let’s say you approach a traditional consultancy business and tell them “hey, I can show you how to turn that 8 year - 500 people - 250 million euro project for your customer into a 1 year - 10 people - 0.9 million euro Agile project.”

OK, let's say you can. Is there any evidence that NOT estimating is the corrective or preventive action that enables the claimed outcome (other than dropping scope to meet the deadline and cost target)? Any project of that size can likely save a few million through simple effectiveness and efficiency improvements alone.

But the author claims:

The duration can be reduced from 8 years to 1 year, an 87.5% decrease in the duration of the project. For the same scope? Didn't say. By simply NOT estimating? Show how that is done.

The cost reduced from 250 Million € to 0.9 Million €, a 99.64% reduction in the cost. For the same scope? Didn't say.

The headcount reduction of 500 people to 10 people, a 98% reduction in absorbed labor. All by NOT Estimating? With no evidence of how to do that by the way.
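The claimed reductions are simple arithmetic, and checking them is itself a small act of estimating:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percent reduction from `before` to `after`."""
    return 100 * (before - after) / before

print(f"Duration:  {pct_reduction(8, 1):.1f}% reduction (8 years -> 1 year)")
print(f"Cost:      {pct_reduction(250, 0.9):.2f}% reduction (250M EUR -> 0.9M EUR)")
print(f"Headcount: {pct_reduction(500, 10):.0f}% reduction (500 -> 10 people)")
```

This reproduces the book's 99.64% cost and 98% headcount figures - and shows the duration reduction is 87.5%, a reminder to check even the simplest claimed numbers.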

There's one way to do this. Don't deliver 98% of what is needed. Then claim Not Estimating fixed the problems.

This is a continuous claim by #NoEstimates advocates that has not one piece of credible evidence to back it up. No case study, no data, no description of the source of these savings other than NOT estimating, no description of the processes other than NOT estimating, no Root Cause analysis of the core problems with the project that, when corrected or prevented, would have removed the cost, schedule, and headcount impacts. Nothing.

As a physicist, there is a quote we learned in graduate school for when someone comes in and makes an outrageous claim - a cockamamie claim, actually

This isn't right. This isn't even wrong

The phrase "isn't even wrong" describes an argument or explanation that purports to be credible but is based on invalid reasoning or speculative premises that can neither be proven correct nor falsified. Hence, it refers to statements that cannot be discussed in a rigorous sense. [1] For a meaningful discussion on whether a certain statement is true or false, the statement must satisfy the criterion called "falsifiability" — the inherent possibility for the statement to be tested and found false. In this sense, the phrase "not even wrong" is synonymous to "nonfalsifiable". [1]

The phrase is generally attributed to theoretical physicist Wolfgang Pauli, who was known for his colorful objections to incorrect or careless thinking. [2][3] Rudolf Peierls documents an instance in which "a friend showed Pauli the paper of a young physicist which he suspected was not of great value but on which he wanted Pauli's views. Pauli remarked sadly, 'It is not even wrong'."[4] This is also often quoted as "That is not only not right; it is not even wrong", or in Pauli's native German,

"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!".

Peierls remarks that quite a few apocryphal stories of this kind have been circulated and mentions that he listed only the ones personally vouched for by him. He also quotes another example when Pauli replied to Lev Landau, "What you said was so confused that one could not tell whether it was nonsense or not." [4]

This is now the Archetype of the #NoEstimates arguments.

#NoEstimates is not falsifiable

The conjectures cannot be discussed in any rigorous sense, since there are no principles by which decisions can be made in the presence of uncertainty without estimating the outcomes and impacts of those decisions. To suggest there are violates the principles of Microeconomics, of human decision making in the presence of scarce resources, of Managerial Finance, and of Probabilistic Decision Making.

The conjecture cannot be tested to be True or False

In any field based on principles (Microeconomics of software development is such an example) and the practices and processes of applying those principles, there are two distinct ways of not being correct:

For something to be wrong it must have some grounding in reality and must follow some legitimate string of logic. It may be wrong, but it had the theoretical possibility of being right.

To be “not even wrong,” something must be so far off from reality or contain such a glaring logical flaw that it at no point would reasonably be considered correct.

#NoEstimates is the latter. The claim of #NoEstimates is not an error in logic or reasoning, it's pseudoscience, designed to play to the fears of software developers that when they provide an estimate it will be misused and abused by their Bad managers, so the fallacy is let's not make estimates and the outcomes of the project will be acceptable to those paying for the work.

This post goes along with the fallacy posted prior - Risk is not there to be mitigated, it's there to be eliminated. And my other favorite from the same author Estimates leads to buffers. Buffers lead to waste. Waste leads to ruin.

Uncertainty comes in two forms

Reducible (Epistemic - relating to knowledge)

Irreducible (Aleatory - from the Latin alea, a die)

There are four kinds of reducible (Epistemic) uncertainties that create a risk to software development projects

Reducible Cost Risk - often associated with unidentified reducible Technical risks, and with changes in technical requirements and their propagation, which impact cost.

Reducible Schedule Risk - Schedule Risk Analysis (SRA)(Monte Carlo Simulation, Method of Moments for example) is an effective technique to connect the risk information of project activities to the baseline schedule, to provide information on the sensitivity of individual project activities to assess the potential impact of uncertainty on the final project duration and cost.

Reducible Technical Risk - is the impact on a project, system, or entire infrastructure when the outcomes from engineering development do not work as expected, do not provide the needed technical performance, or create higher than the planned risk to the performance of the system.

Reducible Cost Estimating Risk - is dependent on technical, schedule, and programmatic risks, which must be assessed to provide an accurate picture of the project cost. Cost risk estimating assessment addresses the cost, schedule, and technical risks that impact the cost estimate.
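A minimal sketch of the Schedule Risk Analysis technique mentioned above: sample each activity's duration from a three-point (triangular) distribution and look at the distribution of total duration. The activities and their estimates here are hypothetical.

```python
import random

# Hypothetical serial chain of activities with (min, most-likely, max)
# duration estimates in days -- the three-point inputs to an SRA.
activities = {
    "design":    (10, 15, 30),
    "build":     (20, 30, 60),
    "integrate": (5, 10, 25),
    "test":      (10, 15, 40),
}

def simulate_total(trials: int = 20_000, seed: int = 1) -> list:
    """Monte Carlo sample of total duration across the serial chain."""
    rng = random.Random(seed)
    return [
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in activities.values())
        for _ in range(trials)
    ]

totals = sorted(simulate_total())
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
deterministic = sum(mode for _, mode, _ in activities.values())
print(f"Most-likely (deterministic) total: {deterministic} days")
print(f"Simulated P50: {p50:.0f} days, P80: {p80:.0f} days")
```

Because the three-point estimates are right-skewed, the simulated P50 sits well above the deterministic 70-day sum - which is the whole argument for risk-adjusting schedules rather than adding up most-likely values.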

There are three kinds of irreducible (Aleatory) uncertainties that create the risk to software development projects

Irreducible Schedule Risk

Projects are over budget and behind schedule, to some extent because uncertainties are not accounted for in schedule estimates. Research and practice are now addressing this problem, often by using Monte Carlo methods to simulate the effect of variances in work package costs and durations on total cost and date of completion. However, many such project risk approaches ignore the significant impact of probabilistic correlation on work package cost and duration predictions.

Irreducible schedule risk is handled with Schedule Margin, which is defined as the amount of added time needed to achieve a significant event with an acceptable probability of success. Significant events are major contractual milestones or deliverables.

Irreducible Cost Risk

Irreducible cost risk is handled by Management Reserve and Cost Contingency, which are program cost elements related to program risks and an integral part of the program's cost estimate. Cost Contingency addresses the Ontological Uncertainties of the program. The Confidence Levels for the Management Reserve and Cost Contingency are based on the program's risk assumptions, program complexity, program size, and program criticality.

When estimating the cost of work, that resulting cost number is a random variable. Point estimates of cost have little value in the presence of uncertainty. The planned unit cost of a deliverable is rarely the actual cost of that item. Covering the variance in the cost of goods may or may not be appropriate for Management Reserve.
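As a sketch of treating cost as a random variable: simulate the cost elements, then size Management Reserve as the gap between the point estimate and a chosen confidence level. The cost elements, their distributions, and the 70% confidence choice are all illustrative assumptions.

```python
import random

# Hypothetical cost elements with (min, most-likely, max) in $K.
cost_elements = {
    "labor":    (800, 1000, 1600),
    "licenses": (100, 120, 200),
    "hardware": (150, 200, 400),
}

rng = random.Random(7)
trials = sorted(
    sum(rng.triangular(lo, hi, mode) for lo, mode, hi in cost_elements.values())
    for _ in range(20_000)
)
point_estimate = sum(mode for _, mode, _ in cost_elements.values())
p70 = trials[int(len(trials) * 0.70)]
print(f"Point estimate: ${point_estimate}K")
print(f"P70 cost:       ${p70:.0f}K")
print(f"Management Reserve to fund to 70% confidence: ${p70 - point_estimate:.0f}K")
```

The point estimate is just one sample from the distribution; the reserve needed to reach a given confidence level can only be found by estimating that distribution.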

Irreducible Technical Risk

If we define Margin as the difference between the maximum possible value and the maximum expected value, and Contingency as the difference between the current best estimate and the maximum expected estimate, then for the systems under development, the technical resources and the technical performance values carry both margin and contingency.

So managing in the presence of uncertainty and the risk it creates mandates making estimates:

Estimates of how much margin is needed to protect the project from the irreducible uncertainties (aleatory)

This margin is usually calculated using some form of simulation, or a reference class

For Epistemic uncertainty that creates reducible risks, a specific risk buydown process is needed to remove the undesirable outcomes. For example:

Buy two in case one breaks

Build the system with a 20% performance buffer to handle the unanticipated workload

Build a fault-tolerant and fail-safe system behavior (this was my specialty many years ago)

The author of those quotes loves to reference macroeconomics texts, ignoring the fact that software development is microeconomics. It sounds impressive when he does this, but it's not applicable to making decisions in the presence of uncertainty while writing software and using other people's money, for markets where buyers are making decisions on the technical value received in exchange for the cost.

29. Only through a survival heuristic like Kelly Criteria. You survive uncertainty, you don't remove it.

I've split out the second part of the post above to address this fallacy directly

The Kelly Criterion is a gambling paradigm for knowing how much to bet.

This can be applied to investing in a portfolio or applied in a casino.

If you think writing software for money is like throwing the dice in Las Vegas, then stop reading right here.
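For reference, the Kelly Criterion itself is a one-line bet-sizing rule: it tells a gambler with a known edge what fraction of the bankroll to wager. Note what it requires as inputs - an estimated win probability and payout odds - so even this "survival heuristic" runs on estimates.

```python
def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly bet fraction f* = p - (1 - p) / b, for win probability p
    and net odds b (profit per unit staked on a win)."""
    return p_win - (1 - p_win) / net_odds

# A 60% chance of winning at even money (b = 1): bet 20% of the bankroll.
print(f"{kelly_fraction(0.60, 1.0):.0%}")  # -> 20%
```

Without an estimate of p and b there is no Kelly fraction to compute - the heuristic cannot even be applied "without estimates."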

Those financial instruments you're investing in have externalities driving them.

The global and national market, the behaviors of the firm

The competition

The financial management system in terms of financing rate, cost of money, the debt market.

Software development projects have some externalities driving them - ontological uncertainties - but it would be a very naive risk manager who lets those externalities control the project.

A risk management plan defines the reducible and irreducible uncertainties that create a risk

A risk handling plan defines these as:

Mitigation

Avoidance

Transfer

Acceptance

The original poster takes macroeconomics terms from a couple of books he's read by a controversial author from the bond trading domain and applies them to software development projects.

This is an equivocation fallacy, used often by those wanting the established principles not to apply to their domain.

28. Agile is about responding to change, over following a plan. We must recognize Estimates endanger that goal

How estimates endanger the goal of responding to change is not stated. As well, responding to change over following the plan is an example of not knowing what Planning is about.

Plans are Strategies. Strategies are Hypotheses. Hypotheses require empirical data to test them. This is basic high school scientific method stuff, which appears to be willfully ignored by the writers of that Agile Manifesto phrase. If you don't have a plan in some form, with tests of progress to plan needed to take corrective or preventive actions, then you're on a Death March project. If you're spending other people's money, you won't be for long, because they will - or should - fire you for being incompetent as the steward of their funds.

Since making decisions in the presence of uncertainty requires buying knowledge about the possible outcomes of our decision, we need to have knowledge of both reducible and irreducible risk created by uncertainty. Reducible risk can be bought. Irreducible risk can only be protected against with margin.

If you don't have a Plan, you don't know what Done looks like in any way meaningful to those paying for your work. In that case, you're on a Death March project, whose only stopping condition is when you run out of money or time.

This is the purpose of the Product Roadmap and Release Plan.

It can be as simple as a list of needed Features on sticky notes on the wall.

It can be as complex as a Scrum of Scrums strategy map in SAFe 4.2 in Rally.

But without some visible picture of what Done looks like, no one writing code knows why they are doing that work or what they are doing it for. They're just spending the customer's money for no defined reason.

Plans are strategies

Strategies are hypotheses.

Hypotheses require tests to confirm they are valid (just like you learned in your High School science class).

Agile provides the tests to the hypothesis through working software.

All projects operate in the presence of uncertainty, reducible and irreducible.

The corrective and preventive actions needed to address the risk produced by those uncertainties are in the Plan. Otherwise, when those risks occur, you're caught flat-footed with no plan, and those paying you will doubt you can deliver what you said you would in exchange for your paycheck.

Managing in the presence of uncertainty requires making estimates since the future is uncertain.

Since risk management is how adults manage projects - Tim Lister

#NoEstimates means the obvious alternative to managing as an adult.

27. How to (write software with agile) (1) Define the most important thing. (2) Work ONLY on that until finished. (3) Repeat

This is a good way to work in priority order, but tells us nothing about when that backlog of work will be done.

Where's the Product Roadmap and Release Plan?

Where's the process that defines that priority order?

Product Roadmap

Release Plan

What're the Measures of Effectiveness and Measures of Performance that define the priority order?

How about the Key Performance Parameters for those most important things?

Where's the estimate to complete, developed from past performance, risk-adjusted for future uncertainties - reducible and irreducible?

This notion is suggested by a leading agile thought leader, but it completely ignores the process of writing software when there is a needed delivery date, a needed budget, and a needed set of Capabilities for that time and budget.

26. Estimation destroys making decisions in the presence of uncertainty by premature commitments.

This is literally willfully ignoring the established, documented, tested, verified, validated processes of good software estimating.

Nothing else to say here. The author of that phrase must never have read a single book or attended a single class or observed a single successful estimating process. OR the Author is "selling" us a pig in a poke.

I think it's the latter.

25. Have 300 Product Backlog Items that you groom every two weeks.

Where's the Product Backlog and Release Plan showing when those Capabilities, Features, and Stories will be needed and when you should start definitizing their content, acceptance criteria, and top-level estimates?

Where are the Rough Order of Magnitude estimates (not 10x, just a rough estimate) for each Feature, in some Tee-Shirt size mapped to hours, taken from the empirical data of past performance, collected automatically from your agile software development management tool?

XS - 1 to 4 hours

S - 5 to 12 hours

M - 13 to 24 hours

L - 25 to 48 hours

XL - 49 to 64 hours

Set the number of productive hours per day to 6, to cover other work.

XL Stories will be sliced into smaller stories, as has been done for 30 years in all good software development domains.

The Development Team will confirm that the Product Owner's Tee-Shirt-size estimate makes sense, from their understanding of the Story and from the historical data captured in the development management tool for all past work (Reference Class Forecasting).

Prior to the Sprint becoming active, Sub Task hours are estimated by the Development Team in a "Story Time" session before a "Capacity Based Commitment" is made.
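The Tee-Shirt mapping above can be turned into a first-order backlog estimate mechanically. The backlog counts and team size below are hypothetical, and the hour ranges are the ones from the table above; XL items would normally be sliced before planning.

```python
# Tee-Shirt sizes mapped to (min, max) hours, per the table above.
SIZE_HOURS = {"XS": (1, 4), "S": (5, 12), "M": (13, 24), "L": (25, 48), "XL": (49, 64)}

backlog = {"XS": 8, "S": 20, "M": 14, "L": 6, "XL": 2}  # hypothetical counts
hours_per_day = 6       # productive hours, leaving room for other work
team_size = 5           # hypothetical

low = sum(SIZE_HOURS[s][0] * n for s, n in backlog.items())
high = sum(SIZE_HOURS[s][1] * n for s, n in backlog.items())
days_low = low / (hours_per_day * team_size)
days_high = high / (hours_per_day * team_size)
print(f"Backlog ROM: {low}-{high} hours, ~{days_low:.0f}-{days_high:.0f} team-days")
```

The output is a range, not a point - which is what a Rough Order of Magnitude estimate is supposed to be.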

Where's the Product Owner to produce those ROM estimates so the developers don't have to?

Why are you reestimating work that hasn't been definitized yet? Work that may not be started for months?

Why are you grooming Stories and Features for Sprints beyond the next two? This is supposed to be an Agile project, where feedback from the User drives the emergence of new and better requirements. You're locking in the requirements for Features and Capabilities that haven't been verified by "working software."

Another perfect example of Doing Stupid Things on Purpose, then claiming NOT estimating will fix them.

24. Have a business and technical management process that only produces 8 hours a week of productive work. Where up to 80% of the duration of the work is chewed up by delays, dependencies, interruptions, illness of workers and related absences of the workforce.

Where in the business is the Adult Supervision that allows the workforce only 8 hours of productive work out of the 40 hours available during the week?

Think about this. 96 minutes out of 480 minutes a day are used to generate value, in exchange for the 480 minutes of paid, planned work.

A sample of 6 departments and firms shows about 32 hours of planned productivity a week (80%) and a measured productivity of 30 hours a week (75%). The non-productive hours include breaks, training, travel, and non-project meetings.

The actual number of "sick" hours is very low on an annual basis, but those hours are baked into the PTO (Personal Time Off) baseline across the year and are easily modeled in the capacity-planning baseline for the Scrum team.
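The capacity-planning baseline mentioned above can be sketched in a few lines: take the team's paid person-days for the sprint, subtract planned PTO, and apply the productive-hours rate. All the numbers below are illustrative.

```python
# Illustrative sprint capacity baseline for a Scrum team.
team_size = 6
sprint_working_days = 10
productive_hours_per_day = 6   # of the 8 paid hours, per the discussion above
pto_days_this_sprint = 3       # planned PTO across the team, from the calendar

gross_days = team_size * sprint_working_days - pto_days_this_sprint
capacity_hours = gross_days * productive_hours_per_day
print(f"Sprint capacity: {capacity_hours} productive hours "
      f"({gross_days} person-days x {productive_hours_per_day} h)")
```

A Capacity Based Commitment then means committing no more estimated Sub Task hours than this number.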

23. A deadly sin in estimation: estimating the NUMBER of people for a project without any understanding of "team" or collaboration

If that's the level of understanding, skill, and experience in estimating, then those paying for the project need to find a better person to do the estimating.

This is one of those toss-off lines, with no understanding of how estimates are actually made - which is becoming clearer as time passes.

22. When you read about examples of bad management - misusing estimating, misusing tools, misusing processes, even misusing paradigms - and you don't hear about how to Prevent or Correct those misuses, then that's the very definition of Doing Stupid Things on Purpose. I'll start listing the links and examples of this classification of DSTOP below this bullet. Here's the first one:

Cost accounting has its place - in deterministic systems, not complex ones. - No. Cost accounting is even more important in complex, evolving, emerging, uncertain environments.

In the presence of these uncertainties, cost accounting is critical, since the variances between budget and actual costs must be known to some degree of confidence to make credible decisions about the future.

21. A classic fallacy of #NoEstimates - Estimates: Never credible if the person giving them is not risking anything when giving them.

This willfully VIOLATES the core principle of Independent Cost Estimating (ICE), an independent cost estimating process that assists in determining the reasonableness or unreasonableness of the bid or proposal being evaluated and is required for all procurements regardless of dollar amount. Research shows that if those making the estimate do have skin in the game, they're going to game the system.

The principles of an ICE include

Developing the estimate without contractor influence

Defining and validating the best value and shared contract risk

Based on market research, reference classes, parametric models from those reference classes

An analysis of reasonable and required resources for performance to planned work

The projected, anticipated, or probable cost and price of the proposed solution

A benchmark for establishing cost/price analysis of the proposed solution

The Independent Cost Estimate is used to

Project and reserve funds for the procurement as part of the acquisition planning process

Determine if assumptions in a cost proposal are based on the same or similar assumptions as used by the firm acquiring the solution

Satisfy the governance and oversight requirements of the firm providing the money

Everywhere I work, and everywhere others in our community of Software-Intensive System of Systems work, the ICE teams validate and verify the cost and schedule estimates on behalf of senior management and the firm, following Tim Lister's advice:

Risk Management is how Adults Manage Projects

20. Listening to a #NoEstimates talk where most of the topics are blatant examples of Doing Stupid Things on Purpose.

It seems there is money to be made in conferences, training, coaching, and maybe even consulting that confirms the client is Doing Stupid Things on Purpose - where the fix offered is NOT to address the root cause, but simply to stop doing that dysfunctional action. This, of course, leaves the root cause of the project's failure in place, while providing NO solution other than a feel-good session where complaining about bad management takes place.

19. Let me keep Features and Stories in the Product Backlog for 6 years.

Never asking the simple questions: do we need these features for the Capabilities in the Product Roadmap? And when will we need them to meet our plan to deploy them to the market or to our internal user community?

18. Product Roadmap? we don't need no stink'in Product Roadmap. It's a waste, and all that grooming of the Roadmap and Product Backlog is just waste. Let's code the next important thing and have the customer tell us what to do next.

So those paying have no visibility to when the needed capabilities - which are composed of Features - will be ready.

No visibility to the increasing value delivered to the customer and when that Value is planned to arrive, so the customer can plan as well.

No visibility to the Estimate to Complete and the Estimate at Completion.

No visibility to the reducible and irreducible risks, when they will be bought down. How much margin is needed?

17. Let's ignore all the well-known biases for estimating, and just continue on as if they didn't exist. Let's ignore all the well known preventative and corrective actions to address those well-known biases and pretend we have no choice other than be subject to and subjugated by them.

A good example of DSTOP.

16. Let's ignore the fact that all projects operate in the presence of uncertainty (aleatory and epistemic) and assume that simple, non-risk adjusted past performance will be the performance of the future, and we can use that to forecast what will happen in that future, with no adjustment for past variance, or emerging uncertainties or the range of variances in the future.

DSTOP at its best.

15. Let's rename established estimating processes about the past, present, and future to a term that we name "not estimating" - Forecasting.

When we hear willful ignorance of basic high school mathematics, you have to wonder what else they don't know.

14. The claim that we can make decisions with past data without assessing whether that data represents the future performance of the project. And use that data without estimating the possible variances of the emerging future behaviors. And call that forecasting, so as not to have to admit the approach is estimating, in support of the #NoEstimates moniker.

This goes back to day one, when the originator of the hashtag claimed you can decide in the presence of uncertainty without estimating.

13. Let's spend more on estimating than on the effort to produce the product.

DSTOP at its best.

12. Let's make a change control process that allows no changes, or is so hard that change is made and no one knows about it.

11. Let's use under-sampled, non-statistically adjusted past performance for future performance and ignore that the past may or may not be in the future.

This is the classic #NoEstimates argument - "we use empirical data, we don't estimate."

Of course that empirical data is a "time series" with random values driven by the underlying uncertainties of the past.

The naive assumption that the future is like the past - statistically as well as behaviorally - is just that: naive.
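A sketch of why the naive "the future is the past" extrapolation undersells the variance: resample the past throughput time series and look at the spread of completion forecasts, rather than dividing the backlog by the average. The throughput samples and backlog size are hypothetical.

```python
import random

past_throughput = [7, 3, 9, 4, 6, 2, 8, 5]   # hypothetical stories done per week
backlog = 60                                  # stories remaining

# Naive forecast: backlog / average throughput.
naive_weeks = backlog / (sum(past_throughput) / len(past_throughput))

# Bootstrap: replay randomly resampled weeks until the backlog is empty.
def weeks_to_finish(rng: random.Random) -> int:
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= rng.choice(past_throughput)
        weeks += 1
    return weeks

rng = random.Random(42)
runs = sorted(weeks_to_finish(rng) for _ in range(10_000))
print(f"Naive forecast: {naive_weeks:.1f} weeks")
print(f"Bootstrap P50: {runs[len(runs) // 2]} weeks, "
      f"P85: {runs[int(len(runs) * 0.85)]} weeks")
```

Even this crude resampling - which still assumes the future behaves like the past - produces a range of outcomes at stated confidence levels, which is what an estimate is. Adjusting for emerging risks would widen the range further.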

10. When I say NO I really mean YES or Not Really, or Not everywhere.

9. I've given many seminars and asked people what their problems are, and take all those into account for my approach. This was a self-selected group, none of whom had financial accountability to the firm they worked for.

8. I wasn't paying attention in the Statistics, Microeconomics, or Business Management class, but listen to me anyway because I've got a lot to say about those topics.

7. I know I get a divide by zero error when calculating ROI, but hey who cares, it's just someone else's money they won't really care.

6. Let's rename standard mathematical terms to fit our oxymoronic concepts of how to avoid telling those paying our salaries how much this will cost in the future.

5. We accepted a cost estimate from our bosses that was lower by 10x to 100x than the actual cost.

4. We started developing software without really understanding what Done looks like.

3. We accepted this project for the price the customer wanted to pay and we'll discover the requirements as we go along.

2. We think we can make decisions about how to spend other people's money without having to estimate how much money, how much time, or the probability that we will successfully deliver what we promised for that money.

1. We've never done this before, and have no one on our team who knows how to do the work. The customers hired us to spend their money without realizing we actually don't know what we're doing, so let's not tell them and let's start spending.

A highly ethical firm here. If you don't know what to do, go find someone who does. It's that simple.