It sort of reminds me of the old joke that there is always an asshole in every group. Look around, if you don't see one you are the asshole.

Every sector of business has produced bad applications. If you can't find an example around you, it is probably your application that is the piece of trash destined to suck the lifeblood out of every programmer who touches it.

Get over it... that's the problem: "managers" like you who get their jobs and are too lazy to educate themselves further on software management. This article isn't exactly new material, but it stresses the importance of having formal training and knowing when to make these things called "decisions" which, amazingly, have an impact on business.

I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

There are only a few changes you should plan for, like a change in the database design. I would just document where you need to update code and queries to accommodate a new column, rather than attempt to make the application independent of its database design.
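A minimal sketch of the approach this commenter describes -- all table, column, and function names here are invented for illustration: keep queries explicit and document which places must change together, instead of abstracting the schema away.

```python
# SCHEMA CHANGE CHECKLIST for the hypothetical `orders` table.
# Adding a column means updating, in lockstep:
#   1. SELECT_ORDER below
#   2. row_to_order() below
#   3. any INSERT/UPDATE statements elsewhere that touch `orders`

SELECT_ORDER = "SELECT id, customer_id, total FROM orders WHERE id = ?"

def row_to_order(row):
    # Tuple unpacking order must match the column order in SELECT_ORDER.
    order_id, customer_id, total = row
    return {"id": order_id, "customer_id": customer_id, "total": total}
```

The checklist comment is the documentation: cheap to maintain, and it sits right next to the code it describes.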

I am currently working on an application that loads the entire database schema into jagged arrays so you have no idea what column or table you may be dealing with in a query.

I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

The first writing is almost always a throwaway; you should expect it and accept it, and it will be significantly better the second time around. Once you have built the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.

This does not "prove", in any way, any dubious axiom that "private industry always does it better than the government". The problem with VCF had more to do with a lack of adequate oversight. The people writing the checks were not held responsible for the results, so they didn't make sure that the people writing the software were held responsible for the results. Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (Yes, both parties; yes, it's the campaign finance system, STILL.)

I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.
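The good-versus-bad abstraction distinction can be sketched with a hypothetical contrast (all names here are invented, not from the article) -- two abstractions over the same task, reading a JSON config file:

```python
import json

# Removes complexity: callers no longer deal with file handles or
# parsing, and the simple case stays a one-liner.
def load_config(path):
    with open(path) as f:
        return json.load(f)

# Adds complexity: a factory and an interface layered over the exact
# same operation, for flexibility nobody has asked for yet.
class JsonConfigSource:
    def read(self, path):
        with open(path) as f:
            return json.load(f)

class ConfigSourceFactory:
    def create_source(self, kind):
        if kind == "json":
            return JsonConfigSource()
        raise ValueError(f"unknown config source: {kind}")

# Both roads lead to the same dict, but the second forces every caller
# through two extra names to get there:
#   load_config("app.json")
#   ConfigSourceFactory().create_source("json").read("app.json")
```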

As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."
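The rule of three quoted above can be sketched in Python -- a hypothetical walkthrough, with all function names invented for illustration:

```python
# 1st occurrence: just write it inline.
def format_invoice_total(amount):
    return f"${amount:,.2f}"

# 2nd occurrence: copy/paste is still acceptable -- don't generalize yet.
def format_refund_total(amount):
    return f"${amount:,.2f}"

# 3rd occurrence: the duplication has proven it is a real pattern,
# so NOW extract the shared helper.
def format_currency(amount, symbol="$"):
    return f"{symbol}{amount:,.2f}"

def format_shipping_total(amount):
    return format_currency(amount)
```

The point of waiting is that by the third occurrence you know which parts actually vary (here, the currency symbol) and which are genuinely shared.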

P. J. Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)

It's definitely not just government. I work for MegaCorp (tm), which had a project that was designed to replace an old system and some bits of other systems.

Project was to take a year, and cost roughly $1 million a day.

It followed exactly the same pattern described here, and it was delivered last week: 2 years late, slower than the old system, with crippling bugs still evident, and it doesn't go any way toward replacing anything other than the single system we wanted to replace.

I'm just glad I got pushed off the project after bursting into laughter when I saw the first delivery of code. Even my old boss who stuck it through to the end said that was the point it should have been taken round the back and shot.

I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.

Ah, generalisations ...

Allow me to throw in my own.

There is no correlation between the chewy goodness of a system and the amount of documentation required. Thus, I think, your (unintentionally) weasel words, "minimal" and "too much."

There is, however, a distinct correlation between the literacy skills of 95% of programmers and the amount of documentation, let alone decent and useful documentation, produced. This correlation leads to the under-production of documentation.

There is also a distinct correlation between the "jobsworth" tendencies of the average, process-obsessed PHB and the amount of (entirely useless) documentation produced. This correlation leads to the over-production of documentation.

Most projects, in documentation terms, are a heady mix of the two tendencies. In other words, you get lots of documentation to choose from, but none of it is useful. One project in my past had an entire server to hold the documentation -- almost all of which consisted of minutes from meetings over the previous three years. On the other hand, a description of the database schema was nowhere to be found.

I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

And it never gets outdated and won't be fixed and won't [largely] be wrong.

To throw fuel on the private-versus-public-sector fire, consider the Fannie Mae CORE project. This is a private company, helped by a private "world class" partner, spending hundreds of millions of dollars of stockholders' money to produce a system that could not be made to work. Bad management, bad requirements, bad architecture, bad engineering and bad testing.

It had every one of the issues discussed in the Avoiding Development Disasters article and invented a few of its own ways to fail.

I think something that is often overlooked in the cycle of failure that plagues large organizations is culture.

A few years back I worked as a consultant for a global corp. I was on a team of about fifteen analysts helping them with an enterprise wide order provisioning system. We were tasked with helping to define and manage the requirements and then to coordinate integration testing and UAT.

From the outset there were problems because there was massive distrust of anything that the internal IT department was involved with. IT had experienced a long run of WTFs, releasing product after product that never made it past beta (and the affected user base continued to use paper based 'systems'). In the case of the project I was working on, the centralized database for the existing 'system' was a room full of physical files.

So anyway, here was a user community in desperate need of some help. Even an adequate IT solution would have been an upgrade over manually walking folders around the office to fulfill orders. But the users wanted no part of the project because of IT's reputation. And that became a self-fulfilling prophecy, because some SMEs more or less refused to take part in requirements definition. Their attitude was 'build it and then I'll have a look at it'.

So because it is so 'easy' to fail on a large project, I think it's common for IT groups in large organization to develop bad reputations with their internal clients. And from there it becomes difficult to recover.

Just my 2 cents

Oh, and to tell what happened on the project: the software did eventually launch into production. The teams that had provided SMEs to help define requirements were happy with the product, and the teams that hadn't wanted to scrap it. Last I heard, it was being pulled from production after about two years.

"In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

You don't know some of the builders I do, and I congratulate you on that.

Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (Yes, both parties; yes, it's the campaign finance system, STILL.)

Sorry, but politicians are not the problem here. It is bad executive management, bad project management and bad software engineering. Politicians bring in their own evils, as does the general state of development practices within the DOJ (at the time and now), but this is clearly a common issue. Large projects often fail because organizations don't practice the necessary techniques, nor do they have sufficiently skilled management and staff to succeed.

But you are right, holding those writing the checks accountable is part of the solution. As is holding the architects and the head project managers accountable. In too many cases no one ends up accountable for massive failures. No pain, no improvement.

"In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

You don't know some of the builders I do, and I congratulate you on that.

I too used to give construction industry counter-examples to poor software engineering. Having gone through the construction process of a few homes now I no longer make that mistake. It's not that they are worse than software efforts, it's that they aren't much better.

I hope construction of larger buildings, bridges, dams, etc. are done far better. But I know there are examples that show that this is not always so.

I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

amen

brazzy:

On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well.

I agree wholeheartedly

brazzy:

To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.

well put!

I think one big problem with the documentation debate among developers is that different developers take the general term "documentation" to mean different things.

One form of documentation should describe a system, its subsystems, and how it connects and interacts with other systems.

Another form of documentation should describe the hack that had to be put in place to make the language/OS/device/etc. work.

The code itself should be a form of documentation by describing what it is doing through variable and method names.

A different form of documentation can describe the different levels of abstraction to help map new developers to important components much quicker.

These are all forms of "documentation", but aren't the same thing. There are other forms of documentation, but this post is already too long. Not all programs need each type of documentation.

The types and detail of documentation needed in an application are as specific to the application as its design. Maintaining the documentation is just as important as maintaining the code.
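The "code as documentation" form mentioned above -- describing what code does through variable and method names -- can be sketched with a hypothetical before/after (names invented for illustration):

```python
# Opaque: the reader must reverse-engineer the intent.
def calc(d, r):
    return d * (1 - r)

# Self-documenting: the names carry what a comment would otherwise
# have to explain.
def apply_discount(price, discount_rate):
    """Return the price after subtracting the discount."""
    return price * (1 - discount_rate)
```

Both functions compute the same thing; only the second one survives being read by a maintainer six months later without a spec in hand.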

Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. First, everyone thinks they're an expert in whatever job you assign them. Second, 90% of projects are filled with people who are in it solely to grab as much money as they can. The more consultants you add, the quicker the money is bled, with little regret over a failing project. Once the money is gone, they'll just move on to the next chump.

Computers only do exactly what someone tells them to (be they user, programmer, or hardware engineer). Saying "it's not the technology or the tools, it's the people" is kind of redundant. Though, being redundant doesn't make it wrong.

Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. First, everyone thinks they're an expert in whatever job you assign them. Second, 90% of projects are filled with people who are in it solely to grab as much money as they can. The more consultants you add, the quicker the money is bled, with little regret over a failing project. Once the money is gone, they'll just move on to the next chump.

In large part, I agree with your characterization of consultants because most consultants come from the school of 'information hiding'. They rarely go into it with their client's best interest at heart. Their real goal is billable hours and therefore, they can actually become barriers to success.

When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.

Oh, please. Most of the worst stories on this site have the word "Enterprise" in the title. Designing every application to be enterprisey right from the start is a recipe for disaster. Too much planning is just as bad as none at all. Planning too far ahead ends up smack against that cold hard thingie called "reality". The trick is to reconsider the scope of the project at every stage, and have the guts to react appropriately. A complete refactoring of around 10,000 LOC is manageable. At 100,000, I'm not so sure anymore. At one million... yeah, I'm kidding.

In other words do plan, and do what's appropriate for the project. Just don't rush to do the most impressive thing you can think of.

I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

And it never gets outdated and won't be fixed and won't [largely] be wrong.

I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?

I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

The first writing is almost always a throwaway; you should expect it and accept it, and it will be significantly better the second time around. Once you have built the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.

I'm currently 3.5 years into a large software project, and we're nearing the end of the "REWRITE LARGE SWATHS OF CODE" part. Our path followed this diagram. I feel warm and fuzzy; we're nearly there.

Interestingly, neither the article nor any of the comments mention software project risk management. Either risk management was non-existent, or management decided to ignore all the classical warning signs of an impending project failure.

I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.

Arthur Andersen was a financial auditing and accounting company that disbanded after the Enron disaster, and Andersen Consulting was its IT/Management consulting spinoff that was renamed to Accenture in 2001.

And yes, they have a very bad reputation for doing anything to increase their share of a project - but pretty much all IT big consulting companies have the same reputation, and well-earned. It's the smaller companies and the freelancers that can supply quality and solve problems rather than prolonging them - if you find the right ones. They won't bloat a project in order to put in 20 more of their people because they don't have 20 more people on short notice. Of course that's not a good thing when you actually need 20 more people, which is why the big companies get hired.

Well, VCF failed and got scrapped by the government. That's where our German government is fundamentally different: here, huge failures in software development (A2LL, for example) just go into production anyway...

I knew someone who worked on this, and the story from the developers is not the same as presented here.

This project was originally one of these huge SEI CMM projects. You deliver the software and it is tested against the list of all requirements; you make sure you get under the number of faults allowed, fix those, get the final version accepted, and hope you win the contract for the next release.

Development was going well; they were replacing the old system and adding a bunch of new capability. Then 9/11 happened and the FBI decided to change everything around.

So SAIC, the company developing this software, got the FBI to switch to a more agile-ish process where they would deliver some basic features, get those tested, start getting them deployed, and continue. They did that and came out with the first version, and the FBI went and tested it as if it were the final version.