Why BDUF can be bad

published:
Wed, 23-Nov-2005
|
updated: Wed, 23-Nov-2005

So quick as a flash after I'd posted my last post
(KISS TDD hello),
an ex-coworker wrote "While I appreciate your thoughts on TDD, you
seem to dismiss BDUF [Big Design Up Front] as being unnecessary. [...]
Seems to me that SOME DUF is necessary to keep the TDD in some frame
of context, no?"

And, yes, he's perfectly correct. Some DUF is always necessary; you
have to know where you're going in order to select a good path,
otherwise you'd find yourself in the weeds.

The issue is this, though: if you are designing something like a
skyscraper, then you'd better jolly well have BDUF. If you don't, it
may collapse in on itself. You may forget about the elevators or the fact
that skyscrapers sway in the wind. If you don't pay attention to
logistics, you'll have materials delivered way before they're needed
(and so you have to store them) or you may not have the materials
there when you need them (in which case, your construction workers
will be sitting around idle).

But writing business software is not like building a skyscraper (or a
car, or a widget). Software is infinitely malleable. You can change it
and recompile and your cost of goods is still essentially zero. The
problem is perhaps that people approach writing software as if it
weren't malleable. They write code monolithically. Their software
doesn't have nice plug-in or extensibility points; it's not decoupled;
it has dependencies all over the shop. It's written to a particular
design, and if that design later turns out to be wrong, oh well.

Better would be to embrace software's malleability right up front. Use
it to write better (that is, more efficient, less resource-hungry, more
modular, more easily testable) code. We certainly want to place some
constraints around this malleability (um, let's call these "tests") so
that the software still does something useful, useful in the sense that
it's what the user wants. But we want to be able to say early on:
oops, that was wrong, let's refactor; or, hey, this functionality
looks like that over there, so let's extract it into another class;
or, heck, this new requirement has come in and we need to adapt the
class model.
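To make the "tests as constraints" idea concrete, here's a minimal sketch in Python (the post doesn't prescribe a language, and all the names are made up for illustration). The test pins down what the code must do, while leaving us free to change how it does it:

```python
import unittest

def invoice_total(line_items):
    """Sum quantity * unit_price over (quantity, unit_price) pairs."""
    return sum(qty * price for qty, price in line_items)

class InvoiceTotalTest(unittest.TestCase):
    # The test is the constraint: it says WHAT the code must do, not
    # HOW. We can later extract an Invoice class, change the algorithm,
    # or refactor freely, and this test keeps the behaviour honest.
    def test_total_of_two_line_items(self):
        self.assertEqual(invoice_total([(2, 10.0), (1, 5.0)]), 25.0)

# run with: python -m unittest <this file>
```

If a refactoring breaks the test, the refactoring was wrong; if the test still passes, the malleability cost us nothing.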

We still have a goal: the software must do X, but essentially we don't
want to preordain exactly how X is going to be built as a class model.
That's the issue with the MSTDD document: you have to work out the
classes you are going to use to implement X, presumably their various
interactions, and type it all up. Now, in theory, people will be using
design patterns to guide their thinking, or maybe the interfaces they
have available in the framework(s) they will be using, or maybe
certain architectural guidelines, but in practice developers tend to
type up the first class design that comes to mind.

They'll design that first class and implement it, and suddenly it's a
deadweight. Simply by existing, it gains a certain immobility and
invincibility. The larger the class, the more immobile it becomes and
the more difficult it is to change or to push this way or that.
I'm not saying it's impossible, but in my experience bodies of code
tend not to be changed, no matter how unsuited they are to the task at
hand. So we glom new bits on, extend this and that, to add more
functionality. It's like a crystal nucleus dipped into a
supersaturated solution: everything new just crystallizes around it.

But TDD is predicated on refactoring, amongst other things. You don't
get wedded to code you've already written since it's likely to be
changed pretty soon. You practice designing and coding classes and
methods; you practice applying simple design patterns; you practice
changing stuff. You gain confidence in your code and confidence in
your ability to see the patterns and duplication and to refactor the
model when needed. All the time though, you are guided by JEDUF (Just
Enough Design Up Front) which dictates what you are trying to build.
In XP-land, JEDUF documents are called user stories; elsewhere they
may be called a specification.
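As a sketch of what that refactoring practice looks like (again in Python, with hypothetical names): two functions that once duplicated a calculation have had it extracted into one place, and the tests, unchanged before and after, are what give you the confidence to make the move:

```python
import unittest

TAX_RATE = 0.175  # an assumed flat rate, just for the sketch

def add_tax(net):
    """The extracted calculation, now living in exactly one place."""
    return round(net * (1 + TAX_RATE), 2)

def retail_price(net):
    return add_tax(net)      # was: round(net * 1.175, 2), duplicated

def invoice_line_price(net):
    return add_tax(net)      # was: round(net * 1.175, 2), duplicated

class TaxTests(unittest.TestCase):
    # These tests were written before the refactoring and did not
    # change afterwards; green before, green after means the
    # duplication was removed without altering behaviour.
    def test_retail_price(self):
        self.assertEqual(retail_price(100.0), 117.5)

    def test_invoice_line_price(self):
        self.assertEqual(invoice_line_price(10.0), 11.75)

# run with: python -m unittest <this file>
```

The design choice here is the point: because the behaviour is pinned by tests rather than by a big up-front class model, spotting the duplication and extracting it is a cheap, routine move instead of a risky one.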