I always read about large-scale transformation or integration projects that are total or near-total disasters. Even when they somehow manage to succeed, the cost and schedule blowouts are enormous. What is the real reason behind large projects being more prone to failure? Can agile be used in these sorts of projects, or is a traditional approach still the best?



One curious thing about this problem is that you usually get completely different answers from developers and from managers.
– mojuba, Nov 25 '10 at 12:33


@mojuba I'm both, and I answered. I hope that doesn't result in a diagnosis of multiple personality disorder.
– Tim Post♦, Nov 25 '10 at 16:52


Agile is best when the customer does not know what they want. Companies are generally unwilling to spend the huge amounts that tend to get into newspapers on projects that are poorly defined.
– Tangurena, Nov 28 '10 at 21:08


Massive project failure like this seems to happen more in government institutions than in private industries, or at least it seems to be in the news more often.
– Bratch, May 22 '12 at 20:48

21 Answers

The main reason is an increase in scope, which the book "The Pragmatic Programmer" describes as:

feature bloat

creeping featurism

requirement creep

It is an aspect of the boiled-frog syndrome.

The idea of the various "agile" methods is to accelerate feedback and - hopefully - correct the evolution of the project in time.

But the other reason is release management: if you aren't geared toward releasing the project (however imperfect it may be), chances are it will fail (because it is released too late, with too many buggy features, and is harder to fix/update/upgrade).

That does not mean you have to have a fixed release date, but it means you must be able at all times to build a running version of your program, in order to test/evaluate/release it.
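The "always buildable" point above can be sketched as a small pre-merge gate: run every build and test step, and the tree counts as releasable only if all of them pass. This is a minimal illustration, not any particular team's tooling; the step names and commands are hypothetical placeholders (trivial interpreter calls here, so the sketch runs on its own):

```python
import subprocess
import sys

def releasable(steps):
    """Run each (name, command) step; the tree is 'green' only if all succeed."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            print(f"NOT releasable: step '{name}' failed")
            return False
    print("releasable: all steps passed")
    return True

# Real projects would list compiler, test-runner, and packaging commands here;
# these placeholders just invoke the current Python interpreter.
steps = [
    ("build", [sys.executable, "-c", "print('compiled')"]),
    ("tests", [sys.executable, "-c", "assert 1 + 1 == 2"]),
]
ok = releasable(steps)
```

The design point is that this check runs on every change, so the project is never more than one revert away from something you could ship.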

I know the ‘Getting Real’ thing to do would be to Flex the scope and keep the launch date fixed, but that doesn’t work if there is agreed upon functionality that cannot be completed in time.

That’s why we don’t advocate specs or “agreed upon functionality.” That’s the root of the problem — saying you know everything about what you need and how it’s going to be implemented even before the first pixel is painted or line of code is written.

When you predict a rigid future on a flexible present you’re in trouble. Rigid futures are among the most dangerous things. They don’t leave room for discovery, emergence, and mistakes that open new doors.

+1: although I might have said grossly underestimated
– Ken Henderson, Nov 25 '10 at 18:28


There are many reasons for underestimating. I'll point out a few. In a complex project, a very small change may have a very large impact, so one could say it wasn't a small change at all; in fact it was a large one. There is, however, a mentality that if something is very easy to implement it shouldn't be any big deal. In fact, a little change in business logic may have a large impact precisely because the project is complex. Other causes: lack of budget, which leads to less time for analysis and design; a "trial and error" mentality instead of putting more time into analysis and design; and lack of competence.
– Amir Rezaei, Nov 29 '10 at 13:54


@Pratik, complexity is often underestimated because programmers (myself included) are notoriously bad at assessing the complexity of a project. This is probably because when you first think about a project, you only see the general outline - but you don't see the thousands of little details hiding just beneath the surface. For example, when presented with some new web project, I have to resist the instinct to think: "that's easy - I'll just throw together a database and some front-end Javascript code. I should be done in about a week." But of course, it's never that easy.
– Charles Salvia, Nov 29 '10 at 20:35

Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called "classic mistakes"...

This section enumerates three dozen classic mistakes. I have personally seen each of these mistakes made at least once, and I've made many of them myself...

The common denominator in this list is that you won't necessarily get rapid development if you avoid the mistake, but you will definitely get slow development if you don't avoid it...

For ease of reference, the list has been divided along the development-speed dimensions of people, process, product, and technology.

People

#1: Undermined motivation...

#2: Weak personnel...

#3: Uncontrolled problem employees...

#4: Heroics...

#5: Adding people to a late project...

#6: Noisy, crowded offices...

#7: Friction between developers and customers...

#8: Unrealistic expectations...

#9: Lack of effective project sponsorship...

#10: Lack of stakeholder buy-in...

#11: Lack of user input...

#12: Politics placed over substance...

#13: Wishful thinking...

Process

#14: Overly optimistic schedules...

#15: Insufficient risk management...

#16: Contractor failure...

#17: Insufficient planning...

#18: Abandonment of planning under pressure...

#19: Wasted time during the fuzzy front end. The "fuzzy front end" is the time before the project starts, the time normally spent in the approval and budgeting process...

#20: Shortchanged upstream activities... Also known as "jumping into coding"...

#21: Inadequate design...

#22: Shortchanged quality assurance...

#23: Insufficient management controls...

#24: Premature or too frequent convergence. Shortly before a product is scheduled to be released there is a push to prepare the product for release--improve the product's performance, print final documentation, incorporate final help-system hooks, polish the installation program, stub out functionality that's not going to be ready on time, and so on...

Product

#28: Requirements gold-plating. Some projects have more requirements than they need right from the beginning...

#29: Feature creep...

#30: Developer gold-plating. Developers are fascinated by new technology and are sometimes anxious to try out new features... -- whether or not it's required in their product...

#31: Push me, pull me negotiation...

#32: Research-oriented development. Seymour Cray, the designer of the Cray supercomputers, says that he does not attempt to exceed engineering limits in more than two areas at a time because the risk of failure is too high (Gilb 1988). Many software projects could learn a lesson from Cray...

Technology

#33: Silver-bullet syndrome...

#34: Overestimated savings from new tools or methods... A special case of overestimated savings arises when projects reuse code from previous projects...

I blame the bidding process. It rewards the group that can make the deal look cheapest/quickest on paper.

The people putting together bids don't want to waste their time if they have no chance of winning, so their normal estimations get put on hold. I know people who have specified normal switches instead of POE switches to save $80. But the project needed POE because it had IP cameras. That $80 needs to be spent, but now it is outside of the spec.

I have a firm belief that a 2-month, $2,000,000 project will still take 2 months and $2,000,000 no matter how many bids you get. If you think doing it right is expensive, wait and see how expensive it is to do it wrong.

One possible reason is that estimates are based on smaller projects, assuming linear growth in cost with project size, when in fact cost grows faster (say, quadratically) due to increasing complexity, longer project duration (more time for requirement changes), and so on. Estimating is hard, and the bigger the project, the harder it gets to estimate correctly.
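The superlinear-growth argument can be made concrete with a power-law effort model. The sketch below uses the Basic COCOMO form, effort = a · KLOC^b, with the published "embedded mode" coefficients (a = 3.6, b = 1.20) purely for illustration; the point is only that any exponent above 1 makes linear extrapolation from small projects an underestimate:

```python
def effort_person_months(kloc):
    # Basic-COCOMO-style power law, "embedded mode" coefficients
    # (a = 3.6, b = 1.20), used here for illustration only.
    return 3.6 * kloc ** 1.20

small = effort_person_months(10)      # a 10 KLOC project
big = effort_person_months(500)       # a 500 KLOC project
linear_guess = small * (500 / 10)     # naive linear scale-up from the small one

print(f"model estimate for 500 KLOC: {big:.0f} person-months")
print(f"linear extrapolation from 10 KLOC: {linear_guess:.0f} person-months")
```

Because the exponent is greater than 1, the linear extrapolation comes in at less than half the model's estimate for the large project in this example.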

Another reason is optimism-biased estimates: to win the bidding, best-case estimates are used to calculate the price. The larger the project, the less likely a best-case scenario becomes. Bidding rules make it likely that the most optimistic offerer wins the contract, so even if five vendors make realistic estimates and a sixth is too optimistic, the optimistic one wins the bidding and fails later. It is a kind of negative selection.
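That negative-selection effect lends itself to a quick simulation (a hypothetical Monte Carlo sketch, not data from any real procurement): each bidder estimates the true cost with unbiased noise, and the lowest bid wins. Even though no individual bidder is biased, the winning bid is systematically below the true cost, and the gap widens as more bidders compete:

```python
import random

random.seed(42)
TRUE_COST = 100.0  # arbitrary units

def winning_bid(n_bidders):
    # Each bidder's estimate is unbiased noise around the true cost;
    # the contract goes to the lowest bid.
    bids = [random.gauss(TRUE_COST, 20.0) for _ in range(n_bidders)]
    return min(bids)

for n in (1, 3, 6, 12):
    avg = sum(winning_bid(n) for _ in range(10_000)) / 10_000
    print(f"{n:2d} bidders -> average winning bid: {avg:.1f}")
```

This is the "winner's curse" in auction terms: the winner is, on average, the bidder who underestimated the most.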

Cost does not translate directly into schedule, a distinction that is important but often lost on 'management'. As we know, "nine women can't make a baby in one month", yet you'd be surprised at how many people think that problems decrease in depth in relation to the amount of money thrown at them. Bad project management, often manifesting itself as micromanagement, is the leading cause of most projects tanking (in my experience). Micromanagement kicks in when 'management' realizes that something is getting out of control and they are clueless as to why.

When that isn't the cause, the expected outcome of the project was probably not tenable to begin with. In my experience, if the time frame of a project is too short, people will be so afraid of making mistakes that result in 'double work' that they don't get much of anything done at all.

This is why management should be populated with seasoned programmers who have a history of leading teams that produced successful projects. Such a person might say "No way could we do that responsibly" despite the possible revenue, and would not be in management for long, which is why many of us (ultimately) answer to MBAs instead of PhDs.

I lost count of the number of companies that I've worked for where a non-programmer was in charge of hiring programmers. I had an interview once where the hiring manager wanted to do nothing but discuss a recent sporting event (I think it was a football game). If the person you have in charge draws more inspiration from an NFL coach than Knuth, the project is going to tank.

Once in a while, you run into something that was well planned, well understood, realistic and seemingly straightforward. For whatever reason, six months into development, everything reversed itself. It happens. Rarely, however, is that the underlying cause of a project becoming a glorified pork barrel.

Still, I have to admit .. if you watch the news, you might see an occasional motorcycle accident or train wreck. You never hear about the millions of motorcycles or trains that arrive on time every day without incident. The same goes with projects. Sure, it's interesting to see a public autopsy of something that went really, really bad, but you almost never hear about stuff that went really, really well. I think that tanked projects are still the exception, not the norm.

People tend to think that software development is a predictive process, trying to measure and estimate things one year ahead. This is not possible! Building software is not bolt manufacture.

Following the same "trend", they try to do a huge analysis (again, one year ahead) thinking that it will cover all the possibilities, and later turn programmers into mere typists. How can anyone think this could work? This kind of behavior just leads to bad estimates and lots of bureaucracy.

The larger the project, the more likely you are working for a large organization. The larger the organization, the more layers of management. The more layers of management, the harder it is for bad news ("we can't have what we want for what we can afford") to make it up the chain of command. The less likely bad news can make it up the chain of command, the more likely a fantasy plan will be accepted and then held to long after it is known to be untenable.

I was introduced to the concept of "Perception of Reality" early on in my programming career. For this I am truly grateful. I believe that this is the biggest reason that any project fails, not just IT projects.

These big projects tend to have an "all or nothing" mentality. The project as defined has to be released in one go, often because it's a changeover from an existing system.

This means that the problems of feature/requirement creep are harder to address so when the project comes to fruition it's often seen as no longer meeting requirements. This can be exacerbated if the existing system has been updated or technology has moved on in the meantime.

What's the solution to this?

I don't really know as no one wants to have two systems running in parallel with a changing set of functions split between the two.

Software projects of all sizes "tend to fail" or "have cost overruns." You don't hear about the cost overrun at the business around the corner, but you do hear about things like the FBI Virtual Case system, or the Denver Airport baggage handling system. So I will make the claim that not all large systems fail, nor do all large systems have cost/schedule overruns.

I've come across large systems that came in on time (the schedule moved once and only once during the project) and on spec (I had no access to budgetary information as we were just 1 of many suppliers). One that impresses me still (and I've written a bit about it on this site) was a large integrated customer management system for a large (in the first 100 of the fortune 500) financial client. I estimate that they blew about $100k/day (for more than a year) on peoples' salaries during conference calls.

In the case of the baggage handling system, the software managers said "based on projects of this size and complexity, it will take 4 years to build and debug this system." The sales and executive managers said "the airport opens in 2 years, we told the client it will take 2 years, so you have 2 years to do it." The test to see if you are a programmer or a mismanager is a simple answer to the following question: "was the baggage handling system late or on time?"

If the customer knows exactly what they want (and very few do), they will be very far along the path to keeping costs and time under control (and these are the folks who tend to do quite well offshoring). If your project has to meet every single possible feature that your customers can possibly dream up, and every single department has veto power over when their pet goldbricks get added to the project, then you are doomed to abject failure from the start (like the FBI's VCF project).

The complexity of large projects can be wildly exacerbated by external political pressures. One department may have a very clear, focused idea of what they want in the new system, but then associated departments jump in with dozens of requests along the lines of "Well, as long as you're doing that, why don't you do this little side task for us too?" You might start by saying "No, that's out of scope", but then the political in-fighting among the departments begins, and the budget for the project is threatened unless everybody gets their piece of the pie.

For years, our local police couldn't search for partial plates through the motor vehicle system, a feature that seems absurdly simple. I asked a friend what on earth was so hard about adding this feature, and they said that every time they proposed switching to a modern database, every other department in the state that had any interaction with the motor vehicle system wanted to get their portion of the system fixed too. The result was complete gridlock in IT modernization. Finally the state put together enough capital to do a system-wide modernization effort, which then floundered because it was so hideously complex.
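To show why the feature itself really is "absurdly simple" on a modern relational database (the actual motor-vehicle system is unknown, so this is a generic sketch with made-up plates and a hypothetical table), a partial-plate search is one LIKE clause:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT, owner TEXT)")
conn.executemany(
    "INSERT INTO plates VALUES (?, ?)",
    [("KAB1234", "Smith"), ("XAB1288", "Jones"), ("ZZZ9999", "Doe")],
)

# A witness remembered only the fragment "AB12"; wrapping it in
# % wildcards matches it anywhere within the stored plate.
fragment = "AB12"
rows = conn.execute(
    "SELECT plate, owner FROM plates WHERE plate LIKE ?",
    (f"%{fragment}%",),
).fetchall()
print(rows)  # KAB1234 and XAB1288 both contain "AB12"
```

Which underlines the anecdote's point: the hard part was never the query, it was the organizational gridlock around touching the shared system.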

Just about all the dramatic failures are contracts that were bid out. What happens to a competent company in such a situation? They make a realistic estimate and thus are almost certainly underbid by someone who made a bad estimate.

If the company can't estimate properly is it surprising they can't build a system properly also?

One reason for failures is that a big project is usually a high-profile, important-to-the business project. When projects and tasks are high-profile, it encourages people to lie.

Your boss wants you to estimate your completion status on the high side. He wants you to estimate overruns and delays on the low side. When you encounter a problem, he doesn't want to hear that it will add three weeks to the task; he wants to hear that you can work it in over a couple of hours tonight.

And so on and so forth.

I was on one project several years ago, for a client. I was brought in after the bid and project plan were completed. There was constant pressure to go faster, faster, and ridiculous cost cutting decisions, heavy overloading of staff, no resources for them; no desks, computers, anything.

Finally, I discovered the project was bid at 7 months and 16 million dollars. I estimated on the back of an envelope it should be 24 months and 50 to 100 million. I set up a meeting with my manager and his manager, and presented my case, and how we were NOT coming anywhere near delivering on time or budget; they downplayed all the problems. At the end of the meeting the CIO called and told both these managers essentially what I said, with the exception of the flaw in the original bid.

I had a chance to roll off the project when they changed technologies to one I wasn't skilled at. I spoke with someone much later. The project ended up being cancelled when it was about half done...at 12 months and 35 million dollars.

Michael Krigsman over on ZDNet has a blog devoted to "IT Project Failures," that may be of interest here.

Another point with long projects that span years is that upgrades and alternative solutions will generally have to be considered along the way, options that weren't a given when the project started. Could part of the project now be done in the cloud instead of on-site, for instance? And while one could start while a platform is at 6.0, by the time the first phase is done there may well be a 6.3 or 6.4 out, and the question of when to upgrade is being asked. Changes in scope and desired functionality, either because requirements weren't gathered correctly or because someone changed their mind, are another couple of points that have already been covered quite a bit.

Can agile be used in these sorts of projects, or is a traditional approach still the best?

The reason why well-applied agile processes don't seem to suffer from this problem as much is simple: you cannot start a large project in an agile manner. You can choose one or the other.

With an agile process, you are never really looking past one or two iterations into the future of the project. There is no 'plan' for the next two years, only the next two weeks. When your view is that short, you have to make a few compromises. You cannot ever start with a plan to make "The final word in widgets", for whatever sort of widget you are designing. At most, you can start with "A widget that can frob", because that's about the most work you can get done in one or two iterations.

The good thing about this, is after a few iterations, you already have a completed, working product that someone can find useful, especially that one customer who desperately needs a widget that can frob and zort.

Essentially, large projects can fail because they aim to solve all of the problems of all of the potential customers. An agile project never has this goal, instead addressing in each version just one critical issue a single customer might have. After a good long while, though, an agile project might be solving a lot of critical issues for a lot of customers.

Big projects have a nasty tendency to get into "infrastructure" mode for years, and forget about building real end-user features and shipping them. By the time it ships, it's very expensive to change, and usually the biggest conceptual changes end up being asked after the first real beta testing occurs.

Failure to accurately estimate cost

If projects seem like they will outgrow their return on investment, they get canceled.

Failure to control quality

With enough defects it is possible for forward momentum to fall to zero, or below. Without any progress at all, it's hard to argue for the project's continued existence.

Too many cooks - Major B&M Retailer where the emphasis suddenly shifted to the web. Suddenly 20 Department heads are dogpiling every initiative to impress the new head cheese. I once had to implement new icons because legal didn't like the look of the old ones.

Excess focus on matching specs over achieving goals - "IE6's icons are slightly faded compared to IE7's. Please drop that launch-date-critical work you're doing and attend to .05% of our customer base to do awful things that will take days to implement and slow down the IE experience even more."

Bad Tools picked by non-devs who couldn't even be bothered to ask their in-house devs for advice.

Bad devs picked by tools - "Why pay 20 competent Java devs a decent salary when we can outsource to 200 barely code-literate guys who know too little to use version control?" as 200 people in different countries simultaneously work on the back ends of three large apps.

Bad/Broken Architecture - Layers upon layers of panicked, get-it-done-yesterday code, by people who were fired or are now managers.