Notes from the Golden Orange

EppsNet Archive: Methodology

Michael James posted this annotated job listing in the Scrum group on Yahoo . . .

[Redacted] is looking for a dedicated and experienced application developer [blah blah blah] to ensure delivery of high quality artifacts, to adhere and to follow [Redacted]’s SDLC. This is an excellent opportunity [blah blah blah] well-known Fortune 50 company.

Tasks and responsibilities

[clip]

Provide accurate and timely estimates (work breakdown schedules)

Must have proven ability to provide project estimates and work-breakdown schedules

And you know these guys are getting great results from their precise WBS and SDLC because of these lines:

Must be extremely responsive, able to work under pressure in crisis with a strong sense of urgency

In the end it doesn’t matter what names you use for your processes, good people will do good work and continuously improve what they do. So much of the discussion around Lean versus Scrum (etc.) is about marketing hype, selling consulting and training services, and cornering the market with new name-brands. . . .

Scrum is not a methodology, it is not a process. It is a simple framework underpinned by some common sense principles. Scrum offers individuals and organizations the opportunity to continuously improve the way they work. It provides a space for people to behave like human beings, with trust, respect and passion. That’s about it. But that is huge.

I work with a company that has the following set of milestones in its standard project methodology:

Vision/Scope Complete

Requirements Complete

Design Complete

Definition Complete

Build Complete

Test Complete

Rollout Complete

I’ve noticed an interesting pattern at the weekly enterprise status meetings: a significant number of projects report being exactly on schedule for each milestone — not one single day ahead or behind! — until they get to rollout, at which point they suddenly go several months late.

Some things can be faked and some things can’t. As long as you have milestones that can be met simply by declaring them done, or by signing off on a document, you can always hit them on time.

But when it comes to putting actual working software in front of a customer, that’s when you really have to deliver the goods, and that’s when the milestones start getting missed.

This is a very high-risk approach to software projects. Deferring testing to the end of a project guarantees that if your project fails for any reason — and if your testing is honest, there's always some non-zero probability that it will fail — you will already have incurred the entire cost of construction.

That’s why the history of software engineering is littered with big-ticket disasters. You never really know what you’ve got until the end, after you’ve spent all the money.

It’s also a good argument for iterative, incremental development. If you have to deliver working software early and often, you can’t fake it.

What the waterfall does well is to keep useless projects from resulting in useless code that needs to be maintained. I’m not sure if that’s the real purpose, but it’s certainly a great side benefit. It may sound inefficient to pay a lot of engineers to get started on projects, do a bunch of analysis and design, and finally abandon the whole thing when something else becomes a higher priority, but every line of code they don’t write is another line that can’t break!

OK . . . you could make a case that waterfall “worked” here — clearly if, after 18 years of effort, people can’t even define the project, that sounds like a project that has no chance of success and shouldn’t be attempted — but it worked at a cost of $2.5 million.

That doesn’t seem very efficient.

What I find is that if you put the customer, the technical team and other appropriate representatives together for as little as four to eight hours, à la a Sprint Planning Meeting, it quickly becomes obvious whether or not anyone understands the problem well enough to go ahead and attempt a software solution.

. . . there is nothing like a tested, integrated system for bringing a forceful dose of reality into any project. Documents can hide all sorts of flaws. Untested code can hide plenty of flaws. But when people actually sit in front of a system and work with it, then flaws become truly apparent: both in terms of bugs and in terms of misunderstood requirements.

This essay by Turing Award winner Fred Brooks is almost 20 years old now. Sadly, the ideas on incremental development are still considered outside the mainstream in IT, which continues to favor the widely discredited waterfall approach.

Much of present-day software acquisition procedure rests upon the assumption that one can specify a satisfactory system in advance, get bids for its construction, have it built, and install it. I think this assumption is fundamentally wrong, and that many software acquisition problems spring from that fallacy.

We were doing incremental development as early as 1957, in Los Angeles, under the direction of Bernie Dimsdale [at IBM’s Service Bureau Corporation]. He was a colleague of John von Neumann, so perhaps he learned it there, or assumed it as totally natural . . .

All of us, as far as I can remember, thought waterfalling of a huge project was rather stupid, or at least ignorant of the realities. I think what the waterfall description did for us was make us realize that we were doing something else, something unnamed except for “software development.”

— Gerald M. Weinberg

In his book, Agile and Iterative Development, [Craig] Larman has well documented the history of the many disasters introduced by accident when the Department of Defense standardized on a non-iterative method that was unproven on large projects. It was essentially a blunder by a consultant who had little experience with real software development.

The DOD has long since abandoned the waterfall method, and the consultant has recanted, but the waterfall approach persists as an urban myth in many software development organizations.

Ill-specified systems are as common today as they were when we first began to talk about Requirements Engineering twenty or more years ago. Yet the task of creating complete and perfect specifications is not rocket science. We have adequate and comprehensible theories at our disposal for specification of finite state automata. We have proceeded over the past decades to develop and refine a discipline of applying these theories to real-world systems. In our methodological focus, we may have lost sight of some endemic problems that plague not the process but the people who do the process. Is it possible that an engineering approach to requirements is as badly suited to our real need as would be an engineering approach to raising teenagers? I’m beginning to think so . . .

There are zillions of books on how to raise kids, and I think my wife has read most of them, along with countless magazine articles . . .

In fact, she’s far more open to taking child-rearing advice from books and magazines than she is from me, despite the fact that none of the authors has ever met our kid, and can therefore offer no insight into our particular situation.

But how comforting must it be to think that there’s a “methodology” for raising kids or for building good software — that someone has already solved all the hard problems for us . . . that we don’t have to rely solely on our own judgment to make critical decisions when we have only a limited amount of time, a limited amount of information, and no certain knowledge of the consequences . . .

Thus spoke The Programmer.

Related Links

Peopleware
If I could require that everyone in the IT business read one book, this would be it. Tom DeMarco (see above) is one of the co-authors.