In Chapter 1 we
observed that human beings are better at visualizing data than they
are at reasoning about control flow. To see this, compare the
expressiveness and explanatory power of a diagram of a fifty-node
pointer tree with a flowchart of a fifty-line program; or, better,
compare an array initializer expressing a conversion table with the
equivalent switch statement. The difference in transparency and
clarity is dramatic.[97]

Data is more tractable than program logic. That's
true whether the data is an ordinary table, a declarative markup
language, a templating system, or a set of macros that will expand to
program logic. It's good practice to move as much of the complexity
in your design as possible away from procedural code and into
data, and good practice to pick data representations that are
convenient for humans to maintain and manipulate. Translating
those representations into forms that are convenient for machines
to process is another job for machines, not for humans.


Another important advantage of higher-level, more declarative
notations is that they lend themselves better to compile-time checking.
Procedural notations inherently have complex runtime behavior which is
difficult to analyze at compile time. Declarative notations give the
implementation much more leverage for finding mistakes, by permitting much
more thorough understanding of the intended behavior.


-- Henry Spencer


These insights ground in theory a set of practices that have
always been an important part of the Unix programmer's toolkit —
very high-level languages, data-driven programming, code generators,
and domain-specific minilanguages. What unifies these is that they
are all ways of lifting the generation of code up some levels, so that
specifications can be smaller. We've previously noted that defect
densities, measured per line of code, tend to be nearly constant
across programming languages; all these practices mean that whatever
malign forces generate our bugs will have fewer lines on which to
wreak their havoc.