Wednesday, September 15, 2010

It is almost common knowledge now that the waterfall model does not work, especially if you talk to programmers of the agile persuasion. In this article I am going to give the waterfall model the credit it deserves, and look at software development from a different perspective.

Before starting, I need to make a disclaimer -- I am not talking about "strict" waterfall or "strict" agile; I don't think those extreme cases apply to most real-life situations. To me, any development process that defines clear phases (initiation, analysis, design, development, testing) is using the waterfall model, regardless of the compromises made to accommodate different circumstances.

First, let's talk a little about history. The waterfall model originated in the manufacturing and construction industries. It suits not only the constraints of projects in those industries (where flexibility is impossible once physical goods are involved), but also their management style -- the command-and-control management method.

If a company using the command-and-control management method tries to adopt an agile development model, guess what will happen? Agile practices rely heavily on making constant changes based on new discoveries and shifting requirements, and each change needs to travel up to the top and come back down to the developers. When too many changes are pending, the decision maker becomes a bottleneck while the developers are held up waiting for a decision. This not only puts more pressure on the decision maker but also delays project delivery.

On the other hand, the waterfall model tends to minimize the communication required between the decision maker and the developers. While the project is in the analysis/design phase, developers can be given other tasks, like maintaining existing projects. Once construction starts, developers are called in to do the work. There will be only occasional cases where changes are both needed AND possible, giving the decision maker plenty of leisure to examine issues that pop up without delaying the project much.

Of course, the better solution would be to decentralize the management process to avoid the decision-making bottleneck. But unfortunately, real-life situations are normally far from ideal. Maybe there aren't enough competent developers to delegate responsibilities to (and no, you don't want to set everybody loose to wreak havoc on the code base); or the company has been burnt by having too many fingers making decisions; or the business model can tolerate low-quality software, making process improvement a lower priority; or maybe simply just because.

You have to think hard about your situation before applying any best practice or the hottest methodology. Sometimes you may have to make compromises and take the second-best option.