
Is Fixed-Price Software Development Unethical?

By Scott W. Ambler, July 18, 2008

Reducing risks, better serving customers

Some Inconvenient Truths

Formal cost estimation strategies—COCOMO I/II, Function Points, Feature Points, and SLIM, to name a few—are based on the same fundamental strategy. First you identify the scope of the system and
a potential architectural solution that addresses that scope. The greater the detail of your specification efforts, the greater your ability to accurately estimate the effort required. Second, you glean critical sizing measures from these specifications. Third, you take measures from historical project data, ideally from your own organization. Fourth, you put all of these measures into a "magic formula," which gives you your estimate. The measures and formula vary between estimation techniques, but the high-level process remains the same. As a result, these formal cost estimation strategies all suffer from a common set of challenges:
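The four steps above can be sketched concretely with Basic COCOMO's organic-mode formula from Boehm's 1981 model (effort = 2.4 × KLOC^1.05 person-months). The sizing figure below is illustrative, and in practice the constants would be recalibrated against your own historical data, which is exactly where the trouble begins:

```python
# A minimal sketch of the four-step estimation process, using Basic
# COCOMO's "organic mode" constants (a=2.4, b=1.05) as the "magic formula".

def estimate_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Step 4: the 'magic formula' -- person-months from estimated size.

    a and b are where steps 2 and 3 feed in: they would be calibrated
    from your organization's historical project data.
    """
    return a * kloc ** b

# Step 1: scope the system; step 2: glean a sizing measure from the spec.
sized_kloc = 32.0  # hypothetical sizing measure, in thousands of lines of code
print(f"Estimated effort: {estimate_effort(sized_kloc):.1f} person-months")
```

The formula itself is trivial; every challenge discussed below lies in the inputs, which is why uncalibrated use produces the error rates Kemerer reported.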

We don't seem to be good at estimation in practice. The Standish Group's "Chaos Report" (www.standishgroup.com) found that only 31 percent of IT projects are reasonably on time and on budget and that on average projects cost 189 percent of what was originally estimated. These results belie the beliefs that fixed-price projects put a maximum on the financial outlay and that it's possible to reasonably compare projects based on their initial estimates. This makes it difficult to identify the potential ROI of an IT project, to compare the responses to an RFP, or to perform reasonably accurate annual budgeting.

We don't seem to be very good at estimation in theory either. In 1987, Chris Kemerer reported in "An Empirical Validation of Software Cost Estimation Models" (Communications of the ACM, May 1987) that formal estimation techniques had error rates of between 85 and 772 percent when applied in an uncalibrated manner. So, if you don't have historical data specific to your organization, you're likely in serious trouble. However, Kemerer found that with significant calibration for your environment, often increasing your overall planning costs past the point of diminishing returns, estimates became 88 percent accurate. The bad news is that this study was done during the 1980s, a period in which we had significant experience with the dominant technologies of the time (COBOL, C). This paper is just an exemplar; there are dozens of others showing that formal estimation strategies do not work as well in practice as some of their proponents claim.

There isn't a very good connection between the theories being espoused and actual practice. Magne Jørgensen and Martin Shepperd reported in "A Systematic Review of Software Development Cost Estimation Studies" (IEEE Transactions on Software Engineering, January 2007) that there still needs to be significant research into the practical application of software estimation techniques, particularly in real-world settings as opposed to arbitrary data sets. Although they included several hundred references to estimation papers, they couldn't find a single study that examined the application of formal software cost-estimation techniques in a real-world setting. There were many that used actual historical data, but no study explored how the estimation strategies were applied in practice. In short, there appears to be a lot of blind faith in the veracity of up-front estimation techniques.

Everyone isn't created equal. As early as 1968, Sackman, Erikson, and Grant reported a 20-to-1 productivity ratio between the best and worst programmers ("Exploratory Experimental Studies Comparing Online and Offline Programming Performance," Communications of the ACM, January 1968). Since then, flaws in their methodology have been pointed out, and arguably a ratio of 10-to-1 is more realistic. More recently, Barry Boehm reported a 10-to-1 ratio in Software Cost Estimation with Cocomo II (Addison-Wesley, 2000). The point is that unless you know exactly who will be doing the work, or at least have a way to guarantee the category of people who will work on the project, your historical data is of questionable value.

Customers know we can't estimate up front effectively. Dr. Dobb's 2007 Project Success Survey (see "Defining Success") found that 87 percent of business stakeholders are clearly more interested in achieving good ROI than in coming in under budget. They might be asking us for fixed-price projects, but that's not really what they want, which raises the question of why we're on this fixed-price treadmill at all.

Historical data isn't accurate. Formal estimation techniques rely on historical data, and even when that data exists it is rarely accurate. Having a decade or more of historical data is of little value when the technologies that you're working with change every couple of years, when the composition of your teams changes, when the pool of people from which you draw changes, and when the skillsets of your people change (hopefully for the better). Part of the foundation on which your estimate is based is constantly shifting, reducing your ability to be precise.

Up-front estimation motivates Big Requirements Up Front (BRUF), which in turn results in significant waste. To reduce the risk of uncertainty in your estimates, you need to increase the level of information that goes into those estimates. This motivates you to do detailed modeling of your requirements and potentially even your architecture. Unfortunately, this ignores the fact that people aren't very good at defining up front what they want, and that even if they were, the requirements would evolve anyway. The implication is that the sizing measures that are input into your estimate are also on shifting ground. The "Chaos Report" shows that projects that took a BRUF approach ended up delivering systems (when they delivered anything at all) in which 45 percent of the functionality was never used by stakeholders. So, by insisting on a fixed-price strategy, often to minimize financial risk, customers ensure that on average close to half of their development budget is wasted. (I've written a detailed article about this phenomenon.) Yes, up-front estimating may motivate IT providers to create detailed plans, but apparently those plans aren't very good in practice.
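To make the scale of that waste concrete, here is the back-of-the-envelope arithmetic the "Chaos Report" figure implies; the project budget chosen below is purely illustrative:

```python
# If 45 percent of delivered functionality is never used, roughly that
# share of the build budget bought nothing. The $2M budget is hypothetical.

def unused_spend(budget: float, unused_fraction: float = 0.45) -> float:
    """Spend attributable to functionality stakeholders never use."""
    return budget * unused_fraction

print(round(unused_spend(2_000_000)))  # on a $2M fixed-price project: 900000
```

In other words, the customer's attempt to cap financial risk through a fixed price can itself be the largest single source of wasted spend.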

Being held to an initial estimate motivates IT providers to adopt strict change management, which in turn prevents the customer from getting what they actually want. The only real option to protect yourself from the risks of fixed-price IT projects is to adopt a traditional change management strategy where you make it difficult for the customer to modify their requirements. These processes are usually onerous, make potential schedule slippage explicit (a good thing), and typically make the customer pay handsomely for any changes. These factors reduce customer motivation to change their defined requirements, even though their requirements have in fact changed, effectively turning the change management strategy into a change prevention strategy. Not only do traditional change prevention strategies exacerbate the challenges associated with BRUF, they are ethically questionable because you're effectively motivating customers to pay for functionality they really don't want. You still need a strategy for change management, but it doesn't need to be the bastardized processes that we see in the traditional world. Agile change management strategies treat requirements like a prioritized stack that is managed by the stakeholders, and the development team merely pulls work from the top of the stack each iteration, thereby producing the greatest ROI as defined by the stakeholders. This lets stakeholders manage the scope, thereby getting software that actually meets their needs, with the trade-off that they are now accountable for those scope changes.
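The agile alternative described above can be sketched as a tiny model: a stakeholder-prioritized stack that the team pulls from each iteration, with reprioritization replacing a formal change-order process. The class and method names here are illustrative, not any real tool's API:

```python
# Sketch of requirements as a stakeholder-prioritized stack.
from dataclasses import dataclass, field

@dataclass
class Backlog:
    items: list[str] = field(default_factory=list)  # highest priority first

    def reprioritize(self, items: list[str]) -> None:
        """Stakeholders reorder, add, or drop items between iterations."""
        self.items = items

    def pull(self, capacity: int) -> list[str]:
        """The team pulls the top 'capacity' items into the next iteration."""
        pulled, self.items = self.items[:capacity], self.items[capacity:]
        return pulled

backlog = Backlog(["login", "reports", "audit log", "export"])
print(backlog.pull(2))                         # ['login', 'reports']
backlog.reprioritize(["export", "audit log"])  # requirements changed; no change order
print(backlog.pull(1))                         # ['export']
```

The key design point is that changing scope costs the stakeholders nothing procedurally; what they give up is the pretense of a fixed scope, in exchange for accountability over what gets built next.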

You waste time and effort on overly precise up-front estimating. We need to question whether the investment being made in up-front estimating to increase its precision is worth it. It's likely a good investment of $10,000 to determine that your project will likely come in between $2.3 and $3.5 million, but it likely isn't worth an additional $100,000 to determine that it will be between $2.85 and $3.1 million. Customers often don't realize the amount of effort required to obtain just a bit more precision in our estimates.
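The diminishing-returns arithmetic in that paragraph, using the article's own hypothetical dollar figures, looks like this:

```python
# Illustrative arithmetic: how much uncertainty each round of estimating
# effort actually removed. Figures are the hypothetical ones from the text.

def range_width(low: int, high: int) -> int:
    """Width of the estimate's uncertainty range, in dollars."""
    return high - low

coarse = range_width(2_300_000, 3_500_000)  # after ~$10k of estimating effort
fine = range_width(2_850_000, 3_100_000)    # after an additional ~$100k

print(coarse)           # 1200000
print(fine)             # 250000
print(coarse - fine)    # 950000 of uncertainty removed for $100k more
```

Whether that extra $100,000 is worthwhile depends on what the remaining uncertainty would actually cost the customer, which is a question rarely asked before the estimating effort is commissioned.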

Is Fixed-Price IT Ethical?

There are some very clear and obvious challenges with fixed-price IT projects. I believe that this is the direct result of the inherent nature of IT projects. Traditionally we've wanted to believe that "software engineering" is 80 percent science and 20 percent art, but in practice development has proven to be closer to 20 percent science and 80 percent art. Or perhaps the 20 percent of software engineering that is art is simply 16 times harder than the 80 percent that is science. Regardless, after four decades of struggling with fixed-price IT projects, building a buffer into your estimate is merely a band-aid that reduces some of your risk. It's time that we start questioning the ethics of this approach.

For a fixed-price IT project to be considered ethical, three criteria must be met. First, the customer must explicitly insist on a fixed-price approach. Second, the customer must have been educated about the inherent risks associated with fixed-price IT projects (feel free to give them a copy of this column). Third, the customer must have been given other choices, such as the stage-gate funding common on large projects or the continuous-flow funding common on agile projects, must have had the trade-offs explained to them, and must still have explicitly chosen a fixed-price approach.
As professionals, we must actively question the ethics of what we do, even when we've been doing these things for decades. We know that we have significant challenges around formal approaches to estimation and around traditional project management in general. Too many IT professionals give up when faced with customer demands for fixed-price IT projects, thereby choosing to work in a manner that dramatically increases project risk. We have to stop accepting the status quo, push back against our customers, and help them rethink this strategy.

We fundamentally know that fixed-price IT projects are a very poor way of working. Luckily, so do our customers. Whenever a customer insists on a fixed-priced IT project, even within the scope of an RFP that's been put out to bid, ethically we must attempt to dissuade them from this perilous path. Unless we start providing a consistent front against fixed-price projects, we will never get off this treadmill that we find ourselves on. We must actively choose to reduce the inherent risks in our industry, and the desire by customers for fixed-price IT projects is likely the greatest one that we face.
