Chris Leary

Big design vs simple solutions

The distinction between essential complexity and accidental complexity is a
useful one — it lets you identify the parts of your design where you're
stumbling over yourself instead of grappling with something truly inherent in
the problem domain.

The simplest-solution-that-could-possibly-work (SSTCPW) concept is inherently
appealing in that, by design, you're trying to minimize these pieces that you
may come to stumble over. Typically, when you take this approach, you
acknowledge that an unanticipated change in requirements will entail major
rework, and accept that fact in light of the perceived benefits.

As a more quantifiable example: if a SSTCPW contains comparatively fewer code
paths than an alternative solution, you can see how some of the above merits
would fall out of it.

This also demonstrates some of the appeal of fail-fast and crash-only
approaches to software implementation, in that cutting out unanticipated
program inputs and states, via an acceptance of "failure" as a concept, tends
to home in on SSTCPW.

Contrast

In my head, this approach is contrasted most starkly against an approach called
big-design-up-front (BDUF). The essence of BDUF is that, in the design process,
one attempts to consider the whole set of possible requirements (typically
both currently-known and projected) and build into the initial design and
implementation the flexibility and structure to accommodate large swaths of
them in the future, if not in the current version.

In essence, this approach acknowledges that the target is likely moving, tries
to anticipate the target's movement, and takes steps to remain one step ahead
of the game by building in flexibility, genericity, and a more 1:1-looking
mapping between the problem domain and the code constructs.

Benefits cited usually relate to ongoing maintenance in some sense and
typically include:

Reuse via genericity.

Flexibility for feature addition.

A more robust model of the problem domain imbued in the program.

Head to head

In a lot of software engineering doctrine that I've read, been taught, and
toyed with throughout the years, the prevalence of unknown and ever-changing
business requirements for application software has lent a lot of credence to
BDUF, especially in that space.

There have also been enabling trends for this mentality; for example, the
introduction of indirection through abstractions carries a monumentally lower
cost on today's JVM than on the Java interpreter of yore. In that same sense,
C++ has attempted to satisfy an interesting niche in the middle ground with its
design concept of "zero-cost abstractions", which are intended to be reducible,
at compile time, to more easily understood and more predictable underlying code
forms. On the hardware side, the steady provisioning of single-thread
performance and memory capacity throughout the years has also played an
enabling role.

By contrast, the system-software implementation doctrine and conventional
wisdom skews heavily towards SSTCPW, in that any "additional" design reflected
in the implementation tends to come under higher levels of duress from a
{performance, code-size, debuggability, correctness} perspective. Ideas like
"depending on concretions" — which I specifically use because it's denounced
by the D in SOLID — are wholly accepted in SSTCPW, given that it (a) makes the
resulting artifact simpler to understand in some sense and (b) does so without
sacrificing the ability to meet necessary requirements.

So what's the underlying trick in acting on a SSTCPW philosophy? You have to do
enough design work (and detailed engineering legwork) to separate what is
necessary from what is merely wanted, and to have some good-taste arbitration
process for when there's disagreement about the classification. As part of
that process, you have to make the most difficult decisions: what you
definitely will not do and what the design will not accommodate without major
rework.