The University of Flatland uses a novel two-part practical test to determine whether Comp. Sci. undergraduates should be steered into practical programming or abstract theory streams. In the first part, the students are given a beaker, a bunsen burner, a stand, and a flask of water. They are told to boil water. Naturally, they fill the beaker, place it on the stand, light the burner, and are queued up to perform the second test.

In the second part of the test, they are presented with a beaker of water sitting on a stand over a Bunsen burner, and an empty flask. Most of the students light the burner and are led away to begin their studies in programming. But a precious few disassemble the apparatus and pour the water back into the flask, reducing it to a problem they have already solved. They are led away to begin the long road to their Ph.D. in Lisp, Recursion, and Category Theory.

Like most of the jokes I retell, this is not particularly funny. But let’s talk about “Design Patterns” and then come back to it. Design patterns allow developers to communicate their intentions to each other with a common vocabulary.

This makes sense. If I create a Flyweight and I want to describe it to another developer, having a word for it, along with a common understanding of what problem a flyweight solves, streamlines our communication. Design patterns in that light are jargon, a sub-language used by specialists to discuss their speciality.
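Since Flyweight is the example here, a minimal sketch of what the pattern buys you may help; the `Glyph`/`GlyphFactory` names are my own illustration, not from the article. The idea is to share one immutable object per distinct value instead of allocating a fresh object per occurrence:

```python
class Glyph:
    """Immutable intrinsic state shared by every occurrence of a character."""
    def __init__(self, char):
        self.char = char

class GlyphFactory:
    """Hands out a shared Glyph per character, creating each at most once."""
    def __init__(self):
        self._pool = {}

    def get(self, char):
        if char not in self._pool:
            self._pool[char] = Glyph(char)
        return self._pool[char]

factory = GlyphFactory()
document = [factory.get(c) for c in "banana"]
# Six characters in the document, but only three Glyph objects exist:
assert len({id(g) for g in document}) == 3
```

Saying "that's a Flyweight" communicates this entire arrangement (shared pool, immutable intrinsic state, factory) in one word, which is exactly the jargon point.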

If it stopped right there, you would have the design patterns invented by Christopher Alexander and articulated in the incredible book A Pattern Language.

But it doesn’t stop there. In certain programming cultures, people consider Design Patterns to be a core set of practices that must be used to build software. It isn’t a case of “when you need to solve a problem best addressed by a design pattern, use the pattern and refer to it by name.” It’s a case of “always use design patterns; if your solution doesn’t naturally conform to patterns, refactor it to patterns.”

Is this madness? Yes… And no…

Consider Scheme. Everything in Scheme is built out of just five primitive “special forms.” It is positively Forth-like in its economy of power. Clearly, then, it is possible to build very powerful programs out of five primitives. So why not build programs out of thirty-five patterns?

While we digest that, back to the joke. Scheme programmers are clearly the impractical theoreticians, aren’t they? Reducing everything to the five special forms they have already mastered, and what-not. While the practical programmers look for the simple, direct way to boil water.

When you take a problem with a straightforward solution and make it more complex by re-expressing it as a combination of patterns, you are pouring the water back into the flask.

So what do we make of the “everything should be one of these thirty-five standard design patterns” argument? I make of it the same thing that I make of the joke. It is clearly possible. But when you take a problem whose straightforward solution lies outside the core patterns of one book and make it more complex by re-expressing it as a combination of patterns, you are pouring the water back into the flask.

The argument that “everything should be one of these thirty-five standard design patterns” is an argument that fits the theoretician, not the pragmatist. It is motivated by a desire to build very complicated things out of very simple parts. That is possible. But it is a fallacy to believe that simplifying the constituent parts simplifies the software. As with boiling water, you make the problem more complex when you place the pattern ahead of the solution.

Of course there are arguments ad nauseam about standardization and readability. But I have this strong suspicion that at the core of it, the motivation is a belief that the world should be reduced to a simple set of easy-to-understand things that can be combined and recombined into complex solutions.

And Scheme programmers? You may have noticed that although Scheme programs are built out of the five special forms, Scheme programmers do not write everything in the five forms: they use abstractions like continuations, macros, and functions to write expressive and powerful programs.

If you rewrote a complex Scheme program in the five primitives, would it really be easier to understand because one programmer could describe it to another using just five words in their common vocabulary?

Scheme’s five special forms: the ones I was thinking of are define, lambda, if, quote, and set!. And I’m not convinced you need define. This is from memory; furthermore, just because you can build everything from a few primitives doesn’t mean that that’s how the implementation works. Smalltalk took this aggressive approach to building Smalltalk in Smalltalk, but I am not immersed in Scheme, so I do not know what current implementations actually do.
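The aside that define may not be strictly necessary can be made concrete: recursion can be recovered from lambda alone by passing a function to itself, with no top-level name bound at all. A sketch in Python, whose lambda plays the role of Scheme’s here:

```python
# Recursion without a recursive definition: the function receives itself
# as the argument f, and recurses via the self-application f(f).
fact = lambda f: lambda n: 1 if n == 0 else n * f(f)(n - 1)

print(fact(fact)(5))  # 120
```

The price of eliminating define is exactly this kind of contortion, which is part of the essay’s point: fewer primitives do not automatically mean simpler programs.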

If you do a little Googling, you will find that people often refer to constructs like let as special forms, because they are not function calls. However, they are not primitive special forms because you can build let out of lambda and function calls.
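That desugaring is short enough to show. In Scheme, (let ((x 5)) (+ x 1)) is equivalent to ((lambda (x) (+ x 1)) 5); the same trick works in Python as a sketch:

```python
# Scheme:  (let ((x 5)) (+ x 1))  ≡  ((lambda (x) (+ x 1)) 5)
# Python analog: bind x by immediately applying a lambda to its initial value.
result = (lambda x: x + 1)(5)
assert result == 6
```

So let adds convenience, not power, which is why it counts as a derived form rather than a primitive one.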

Given the primitive special forms, you also need a library with functions like eq?, car, and cdr defined. Between the primitive forms and the primitive library, you can define eval metacircularly and define-syntax for all of the other special forms. From there, you can build a modern Scheme in Scheme.
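A toy sketch of that metacircular idea may make it less abstract. This is my own illustration in Python rather than Scheme (with expressions written as nested Python lists), handling only the five special forms above plus a tiny primitive library; real Schemes add tail calls, macros, and error handling:

```python
import operator

def make_env(names=(), values=(), outer=None):
    """An environment is a dict of bindings plus a link to its enclosing scope."""
    env = dict(zip(names, values))
    env['__outer__'] = outer
    return env

def lookup(env, name):
    """Walk the scope chain to find the environment that binds name."""
    while env is not None:
        if name in env:
            return env
        env = env['__outer__']
    raise NameError(name)

GLOBAL = make_env()
GLOBAL.update({                       # the "primitive library"
    '+': operator.add, '-': operator.sub, '*': operator.mul,
    'eq?': operator.is_, 'car': lambda p: p[0], 'cdr': lambda p: p[1:],
})

def seval(expr, env=GLOBAL):
    if isinstance(expr, str):         # variable reference
        return lookup(env, expr)[expr]
    if not isinstance(expr, list):    # literal, e.g. a number
        return expr
    head = expr[0]
    if head == 'quote':
        return expr[1]
    if head == 'if':
        _, test, conseq, alt = expr
        return seval(conseq if seval(test, env) else alt, env)
    if head == 'define':
        env[expr[1]] = seval(expr[2], env)
        return None
    if head == 'set!':
        lookup(env, expr[1])[expr[1]] = seval(expr[2], env)
        return None
    if head == 'lambda':
        params, body = expr[1], expr[2]
        return lambda *args: seval(body, make_env(params, args, env))
    fn = seval(head, env)             # otherwise: function application
    return fn(*[seval(arg, env) for arg in expr[1:]])

seval(['define', 'square', ['lambda', ['x'], ['*', 'x', 'x']]])
print(seval(['square', 7]))  # 49
```

Everything beyond these five forms (let, cond, and so on) can then be defined by rewriting into them, which is the "build a modern Scheme in Scheme" move.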

(There may be some other good choices for building “a Scheme.” You may want to consider call/cc a primitive special form. If you don’t, you have to rewrite function calls (probably using CPS), and this means you are no longer running your Scheme programs in your primitive Scheme, but rather in your evaluator.)

There's another big difference between them: the Scheme forms are orthogonal to each other, while the design patterns aren't. It's much easier to compose orthogonal abstractions than to constantly decide which form of object creation is better.

The lack of formalism in design patterns is, most of the time, a serious downside. People are constantly inventing new design patterns that are trivial combinations of parts of existing ones. As this is more common in OOPLs than in FPLs, I reckon it's because OO lacks a core formalism like the lambda calculus and algebras. The closest things in the FPL camp are algebraic and categorical concepts (e.g. monads, arrows, morphisms), but as those are formally defined it's possible to see the relations between them, so there's a distinction between core and derived concepts.

OTOH pouring the water back into the flask is great in pure languages because we don't need to boil the water twice; we just share the earlier result ;)
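There is a real mechanism behind that joke: when a computation is pure, its result can be shared instead of recomputed. A hedged Python sketch of the same idea via memoization (the `boil` function is mine, purely illustrative):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def boil(flask):
    """A 'pure' computation: same input, same output, so sharing is safe."""
    global calls
    calls += 1
    return f"boiled {flask}"

boil("water")
boil("water")       # second request: no re-boiling, the cached result is shared
assert calls == 1
```

In a lazy pure language this sharing happens automatically; here the cache makes it explicit.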

Your comment is exactly what I hope for when I write: someone to come along and take things to the next level.

Thank you.

"There's another big difference between them: the Scheme forms are orthogonal to each other, while the design patterns aren't. It's much easier to compose orthogonal abstractions than to constantly decide which form of object creation is better."

There's something else, although this may not apply to Scheme. Certain core concepts allow us to perform reasoning about the behaviour of the program.

Three obvious examples:
1. If you ban pointers and stack variables, you can supply automatic garbage collection.
2. If you ban mutability, you can perform lazy evaluation, automatic vectorization, and other optimizations.
3. If you ban certain forms of dynamic programming, you can perform static type checking.

The ad-hoc nature of the design patterns does not provide any benefits like this. You cannot say, "If a program is strictly limited to the use of this set of patterns for these situations, we can prove this and this and this about it; we can guarantee this about it."

As for the actual discussion, there's a direct relationship between restricting a set of constructs and how productive we are using them. It's just a matter of finding the correct subset, and I've seen it over and over again: in Smalltalk, Forth, Scheme, Haskell98 (plus multi-parameter type classes; the extra stuff is sometimes too much to reason about), Ruby on Rails (not Ruby in itself though), the relational/pi/join/lambda calculi, regexes...

"Certain core concepts allow us to perform reasoning about the behaviour of the program."

The extra reasoning always applies, even with the incorrect set of constructs, but some forms of reasoning are better than others. In a sense we (as a profession) still don't know if it's better to be implicit (e.g. GC) or explicit (e.g. monadic regions for memory management). AFAICS there's initially an incorrect explicit usage, followed (usually much later) by implicit usage, followed by correct explicit usage. The bad thing is that in most cases we still don't know the correct explicit usage.

"The ad-hoc nature of the design patterns does not provide any benefits like this."

Design patterns are good to start intuitions, but awful to build theorems. I follow both the patterns people and the FPL community and it's very interesting to see how they are similar in their intuitions but different in their methods. Recently there was a discussion on the Haskell Cafe about how to structure the hierarchy of monads, arrows and such. Many of the arguments were drawn from intuition about a sense of beauty, but they always ended up with solid theory behind them. Unfortunately we are still in the dark ages of software, and the difference between science and superstition is still truly unknown. I find it amusing and depressing that both "Patterns of Enterprise Application Architecture" and "Basic Category Theory for Computer Scientists" helped in my last project in roughly equal parts.

Great piece, Reg. Great comments, Daniel. And Daniel, I really think we are still in the dark ages of software. All the people I know or read that really seem to understand where we are with software are inveterate intellectual wanderers. I've begun to distrust anyone who isn't. So, I would think using both those books makes more sense than either alone.

We have no subtlety in our practice. We approach many software engineering problems head-on and find they are undecidable. We are rarely clear on where we can actually go, where we can't go now, and where we can't go at all.

Christopher Alexander wrote about design patterns in the late 1970s, as you know. What you don't know is that he has been busily working on the problem of what constitutes good design ever since. In his lengthy four-part book The Nature of Order, he describes his current views on the problem of design. What's interesting here is that Alexander has moved on from considering surface-level patterns to looking at the underlying principles. He also discusses what he calls 'generative processes', which he claims, when used properly, will always guarantee good design results. (His generative process has a lot in common with the notions of step-wise refinement, top-down design, and iterative development.)

Why is any of this relevant? Well, unfortunately, the computer programming profession borrowed design patterns and has been applying them without really understanding where this tool came from or why. Instead of searching for the underlying principles that render design patterns irrelevant, the field remains fixated on seeking out new or imagined patterns and slavishly copying them.

"Reduce everything to patterns" works fine if you catalog all of your patterns instead of just the complicated ones. Here are a few of my favorite patterns:

NV: Named variable
PF: Parameterized function
CB: Conditional block

Also I might add that the original task shows a legitimate distinction between theory and practice. Describing a solution and executing a solution have different costs, so different standards of elegance apply.

"Most people consider Design Patterns to be a core set of practices that must be used to build software."

Most? Really most? That's a pretty bold claim. I've been a professional programmer for more than a decade, in the UK, USA and Singapore, in corporate IT departments, ISVs and professional service companies, and I've not seen this.

And I'm a "patterns" guy. I'm in that world: I've still got (and still use) my first edition GoF, I was a Portland Pattern Repository regular before (just before) all that XP stuff started on Ward's wiki, I've even been to EuroPLoP and published patterns discovered in my own work, and I really haven't come across this. Where is it going on?

Patterns and OO are similar in this regard: inexperienced (or just slow) programmers can't grasp the necessary theoretical concepts to manage abstraction or good design in the general case, but they can rote-learn a set of specific blunt tools that are sufficient for most small problems.

You get the side-benefit that the constrained language gives them less scope for inventing new ways of getting things wrong, but the drawback that initial success often convinces them they're ready to take on more complex challenges without the general understanding, leading to some serious train-wrecks.

I found that Design Patterns (of the Gamma et al variety) were valuable in that they helped codify and solidify ideas that I had seen many times but didn't have a name for. Having a name helps provide a hook on which you can hang new knowledge and experience.

That said, I think a lot of the official Gang-of-Four design patterns are making up for deficiencies in the languages. Some of them are not needed, or are pretty trivial, in Lisp (and probably also Smalltalk and Ruby and so on). Not all, though.

We could argue whether this applies to most, or most Java, or most BigCo, or most overall but not most who are Agile, or most who have heard the phrase "Design Patterns" but have never read the book, or most who have read the book but cannot name any pattern not included in the book, or...