Thanks for the link. Seems overall pretty good for this kind of thing, though I'd say it's pretty near impossible for anything like this to get "very good" or better -- just too many opinions/intangibles/etc. Anyway, I *especially liked* what seems to be a typo: "Doesn’t let [the] good be the enemy of perfect." His subsequent remarks suggest he meant it in the usual vice-versa way. But I kind of like it the way he said it. Reminds me of basketball coach John Wooden's advice: "Make every day your masterpiece." And I feel programmers should at least aspire to this, though maybe not achieve it.
– John Forkosh, Jul 2 '16 at 7:44

5 Answers

This is clearly a misunderstanding: the author does not mean "pattern" in the sense of a GOF design pattern. He is not even talking about patterns in your code, but about patterns in the problems you are going to solve with your code.

To express his recommendation in other words: one should try to write code which solves a whole category of problems, not just one very specific case of a problem.

Here is a trivial example: do not write code for filling an array of size 10 with the numbers 1 to 10, even if that is currently the requirement you have. Better to write a function which fills an arbitrary array with the numbers from 1 to sizeof(array). That piece of code is about the same size and no more complicated, but it can be reused much more easily, tested much more easily, and it solves the "wider category of problems".
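For instance, a minimal sketch in TypeScript (the function names are made up for illustration):

```typescript
// The over-specific version: only ever fills exactly ten slots.
function fillOneToTen(): number[] {
  const result: number[] = [];
  for (let i = 1; i <= 10; i++) {
    result.push(i);
  }
  return result;
}

// The general version: same size, same complexity, far more reusable.
function fillOneToN(n: number): number[] {
  const result: number[] = [];
  for (let i = 1; i <= n; i++) {
    result.push(i);
  }
  return result;
}
```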

Of course, in real situations, finding the correct category (or "pattern") takes some experience and a bit of talent; that is what the article is talking about. One should add to that paragraph that there is a certain risk of overgeneralizing, so it is important to generalize only when it does not make the code unnecessarily complicated (the best generalizations make the code simpler).

This is pretty vague, and the author doesn't give any specific guidance. I'm pretty sure he does not mean coding by implementing one design pattern after another. I think he means that it is often easier to write the general case than the specific case.

Here's a recent example from my own work. We are processing data from an external source. There are multiple processing steps and at each step some of the input data may be rejected. We need to collect all the erroneous records.

The legacy code contains multiple implementations of this pattern, and it's not easy to get any one case right. So one of my team members extracted this pattern into a general method that executes functions in sequence and collects all the exceptions for later reporting. The general case was easy to write and test, and then all the specific cases were greatly simplified. It is often easier to write and test more general code, because, being general, it has no dependencies.
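A minimal sketch of what such a helper might look like in TypeScript; the names and exact shape here are illustrative, not the actual code from that project:

```typescript
type Step<T> = (input: T) => T;

// Runs each step in sequence; a failing step is skipped and its error
// recorded, so all problems can be reported together afterwards.
function runSteps<T>(input: T, steps: Step<T>[]): { result: T; errors: Error[] } {
  const errors: Error[] = [];
  let current = input;
  for (const step of steps) {
    try {
      current = step(current);
    } catch (e) {
      errors.push(e instanceof Error ? e : new Error(String(e)));
    }
  }
  return { result: current, errors };
}
```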

Developers with experience only in OO/Procedural languages often miss these opportunities to generalize common control patterns. They see the duplication in computations, but miss the duplication in control.

Most programs have patterns of code that will be required several times, possibly with variations. If you can write code to handle the pattern, you greatly simplify development as you don't have to rewrite similar code. Consider these examples:

An application reads and writes text files at several points in the execution. You have a couple of options:

Add the code to open and read the relevant file everywhere you need it.

Write functions that read or write a file, taking the filename and the text (in whatever format is used by the application) as parameters. Use these wherever you need to read or write a file.

The second case is simpler to code (after the first file). It is also far easier to test, as the sketch below shows.
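On Node.js, for example, the second option can be as small as this (a sketch; the function names are invented):

```typescript
import { readFileSync, writeFileSync } from "fs";

// One tested implementation, shared by every call site.
function readTextFile(filename: string): string {
  return readFileSync(filename, "utf8");
}

function writeTextFile(filename: string, text: string): void {
  writeFileSync(filename, text, "utf8");
}
```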

The application needs to present many forms on a text terminal (the approach could be generalized to other UIs). Forms need to allow navigation from any field.

Code each form and each field validation as required.

Create a function that presents the form based on a data structure containing the relevant data (prompt, field type (character, number), field size, etc.) and returns a structure containing the field values. The function contains the appropriate navigation logic (previous field, field #, top, etc.). Create the data structure for each form, and use it whenever that form's data is required, as sketched below.
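Here is a sketch of that idea in TypeScript; the field descriptor and function names are hypothetical, and the navigation and validation logic is elided:

```typescript
type FieldSpec = {
  name: string;
  prompt: string;
  kind: "character" | "number"; // field type
  size: number;                 // field size
};

// One generic presenter driven by data, instead of one function per form.
// readLine stands in for whatever terminal input routine the app uses;
// navigation (previous field, field #, top, ...) is omitted for brevity.
function presentForm(
  fields: FieldSpec[],
  readLine: (prompt: string) => string
): Record<string, string | number> {
  const values: Record<string, string | number> = {};
  for (const field of fields) {
    const raw = readLine(`${field.prompt}: `).slice(0, field.size);
    values[field.name] = field.kind === "number" ? Number(raw) : raw;
  }
  return values;
}

// Each form is then just a data structure:
const addressForm: FieldSpec[] = [
  { name: "street", prompt: "Street", kind: "character", size: 30 },
  { name: "zip",    prompt: "ZIP",    kind: "number",    size: 5  },
];
```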

These are two of a set of functions that cut development time on one project to 25% of the estimated time. Neither functionality was readily available at the time.

The wording in that article is quite unfortunate, because it could be interpreted to mean "find well-known software patterns, and stitch them together to create a working application." That is an approach commonly used by inexperienced developers, and it works, provided you understand the patterns, how to apply them properly, and what their appropriate use cases are. But inexperienced developers often reach for this approach too soon, before they fully understand the implications.

Experienced developers are capable of creating their own code patterns.

By way of explanation, I'll provide an example. Let's say you have the following code:
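A minimal sketch in TypeScript, with hypothetical keys and values:

```typescript
// A hard-coded mapping expressed as control flow.
function getCapital(country: string): string {
  switch (country) {
    case "France": return "Paris";
    case "Japan":  return "Tokyo";
    case "Canada": return "Ottawa";
    default:       throw new Error(`Unknown country: ${country}`);
  }
}
```

An experienced developer recognizes that this switch is really just a lookup table, and replaces it with a dictionary:

```typescript
// The same mapping expressed as data rather than control flow.
const capitals = new Map<string, string>([
  ["France", "Paris"],
  ["Japan", "Tokyo"],
  ["Canada", "Ottawa"],
]);

function getCapital(country: string): string {
  const capital = capitals.get(country);
  if (capital === undefined) {
    throw new Error(`Unknown country: ${country}`);
  }
  return capital;
}
```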

The dictionary doesn't have to be hard-coded, like the switch statement does. You can choose to load the dictionary from any compatible data source (including XML configuration files, a database or even a remote data feed), and it will still produce values the same way.

You can add new key/value pairs to the dictionary, or delete values that are no longer needed, at runtime. It doesn't have to be a fixed list.

The dictionary will scale better (it will maintain good performance even if it has a large number of entries).

The Dictionary is a powerful software pattern that has a wide range of uses. It is a generalized data structure; you can choose the types it operates on, and be guaranteed a certain level of performance from it. You can even have dictionaries of dictionaries.

For larger data storage and retrieval jobs, the experienced developer knows to reach for a database, because it's proven technology that provides certain guarantees about the way it works.

This is what the blog post means by "write code for patterns, not specific instances."

From the article: "Writes code for patterns, not specific instances – it’s an intuitive skill that can’t really be learned, in my opinion. You either have it or you don’t. Really talented developers infer the patterns in a system even before a module or a piece of code is written, and they code for those patterns rather than specific instances. This skill limits technical debt and results in a tighter, more usable codebase."

That's rather vague.

I'll use one of my most favored, and at the same time least liked, pieces of software ever as an example: git. One of the key reasons git has been so successful as a distributed version control system is that its authors saw the pattern. Git tracks only four kinds of objects: commits, tags, trees, and blobs. Add in the concepts of branches and remotes, and that's git at its core. The authors saw this pattern very early in git's development, and recognizing it that early was key to git's success.

One of the key reasons git is so disliked by some is that its authors did not see the pattern in how people actually use version control systems, nor impose one on git's command structure. This failure has resulted in a rather large and complex command set with non-intuitive arguments, and commands that are sometimes duplicative and/or inconsistent. The result is that the git UX is rather unpleasant.