An "Elegance" value of 1 represents a perfectly elegant solution (i.e., a solution in which all the solution complexity is essential, and none is accidental), while a value close to 0 represents a big ball of mud. (A value of 0 is considered impossible, as any solution must have some essential complexity.) Because this equation describes a subjective phenomenon (the perception of elegance), it is built from subjective terms (essence, accident, complexity). Still, we're all human, and we're more similar than we are different; it's likely that we'll share the same basis for deciding what is complex, what is essential, and what is not.
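The description above implies a simple ratio; the sketch below is my reconstruction from the surrounding prose, not the author's original formula:

$$\text{Elegance} = \frac{C_{\text{essential}}}{C_{\text{essential}} + C_{\text{accidental}}}$$

Under this reading, a solution with no accidental complexity ($C_{\text{accidental}} = 0$) scores exactly 1, a solution dominated by accidental complexity approaches 0, and a score of exactly 0 is impossible because $C_{\text{essential}} > 0$ for any solution.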

This is my basis: perceived complexity is proportional to the effort one must spend developing an understanding of the system under study. (I won't go further into it here, but note that the word "perceived" points out that the experience of complexity will vary with the observer.) Essential complexity is recognizable as the part of the implementation that teaches you something about all possible solutions, while accidental complexity is what's left.

In other words, a design (including implementation code) is recognizable as elegant when it teaches you about the domain. An elegant design can be used as a teaching tool.

Monday, March 10, 2014

LtU pointed me to Design Principles Behind Smalltalk. I'll quote the first design principle named, because it illustrates what I think is a mistake in reasoning common to those of a mathematical bent:

Personal Mastery: If a system is to serve the creative spirit, it must be entirely comprehensible to a single individual.

The point here is that the human potential manifests itself in individuals. To realize this potential, we must provide a medium that can be mastered by a single individual. Any barrier that exists between the user and some part of the system will eventually be a barrier to creative expression. Any part of the system that cannot be changed or that is not sufficiently general is a likely source of impediment. If one part of the system works differently from all the rest, that part will require additional effort to control. Such an added burden may detract from the final result and will inhibit future endeavors in that area. We can thus infer a general principle of design:

Good Design: A system should be built with a minimum set of unchangeable parts; those parts should be as general as possible; and all parts of the system should be held in a uniform framework.

I strongly sympathize with the point of view outlined here. If one can master simple, general principles, then that reduces the burden for understanding some set of more specific ideas, and can potentially greatly increase the number and scope of the ideas one can understand and use at any given time -- it can improve one's intellect.

That said, it is considerably more difficult to impart understanding of general ideas than of specific ones. If this isn't immediately obvious to you, consider the order in which you learned some mathematical concepts. Take the following problems:

If Joey has three apples, and gives two away, how many does he now have?

Solve for x: x = 3 - 2.

Prove that the addition operation under the set of integers modulo some constant forms a group.

Give an example of a non-Abelian group.

What the hell is a left-adjoint functor?
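The third problem in this list can at least be checked mechanically for a fixed modulus. Here is a brute-force sketch in Python; note that n = 5 is an arbitrary choice of mine, and finite enumeration is an illustration, not the proof the problem actually asks for:

```python
# Brute-force check that addition modulo n satisfies the group axioms.
# Enumeration over a finite carrier set stands in for a real proof.

def is_group(elements, op):
    """Check closure, associativity, identity, and inverses by enumeration."""
    elements = list(elements)
    # Closure: op must map pairs of elements back into the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a op b) op c == a op (b op c) for all triples.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e with e op a == a op e == a for all a.
    identity = next((e for e in elements
                     if all(op(e, a) == a and op(a, e) == a
                            for a in elements)), None)
    if identity is None:
        return False
    # Inverses: every a has some b with a op b == identity.
    return all(any(op(a, b) == identity for b in elements) for a in elements)

n = 5
print(is_group(range(n), lambda a, b: (a + b) % n))  # True
```

By contrast, multiplication modulo 5 over the same set {0, ..., 4} fails the check, because 0 has no multiplicative inverse.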

I think you can be expected to gain mastery of each of these problems in the same order in which the problems are listed. Each problem is more abstract than the previous, and each successive problem is, in a sense, simpler and more general than the previous. But each successive problem is also, to my mind, more difficult than the previous: We needed to understand the more specific ideas before we could be expected to generalize. The mechanism for "good design" quoted above can probably be considered to be in some tension with the stated goal of maximizing "personal mastery" of the system.

The implications of this tension are, I think, important. In particular, under the assumption that a more abstract understanding of a problem domain can make the problem more tractable, it's usually in any given author's interest to move "up" the "abstraction ladder" in order to better solve a problem herself. To the extent that this means she happens upon a good solution faster than others, this is all to the good. But to the extent that it means she happens upon a much different solution than others would, whatever additional "elegance" her solution has must be weighed against the cost in comprehension for the others she works with.

Tuesday, March 4, 2014

Given that, as software designers, we don't know exactly what we're doing, we must be learning what we're doing as we go. By implication, design can be (and, in my opinion, should be) approached as a learning opportunity: the design process should support learning as much as possible about the form of the system under development, as rapidly as possible. An effective design process will include an effective learning environment. (For the record, I also subscribe to Jack Reeves's idea that code is design. As such, I think this idea also applies to software construction.)

Learning is facilitated when there is rapid and easy visibility into the link from action to consequence. This is partly (primarily?) why methodologies like test-driven development, continuous integration, and continuous delivery focus on decreasing the turn-around time from making a change to the system to seeing the results of that change: by seeing the results of a change while the thoughts that led to the change are still in working memory, it becomes much easier to see why the change did or did not have the desired effect. Sometimes, an automated test will give the fastest form of feedback from idea to consequence. More often, the fastest feedback comes from thinking through the consequences (thinking "if I change this loop's end-condition, it should resolve my off-by-one error" will usually be faster than making the change and re-running the test), with the automated tests acting as a check on the model of the software you maintain in your mind as you work.

Big Design Up Front will often fail because it creates a large lead-time between action (the "big design") and consequence (the built system). On the other hand, no design up front can also break down in cases where systemic consequences of early design decisions do not become visible until late in the construction process, which can give rise to issues that cannot be resolved cheaply or easily. In either case, the fundamental issue is that a decision about the correct sequence of software construction activities has cut off opportunities for determining the most natural way to build the system under development. And we get less maintainable systems as a result.

If, instead, we focus on design (in a broad sense, including requirements elicitation and coding and maintenance) as an opportunity to learn what the shape of our created system should be, then we are encouraged to ask the most significant questions first. (What do our customers want? What are the fundamental concepts in our problem space?) We are encouraged to test our ideas as soon as we can. (Following TDD to test the implementation. Building the Minimum Viable Product to test the market for our ideas.) We are encouraged to reflect on the outcome of having our ideas tested. (Refactoring towards greater insight [Evans, 2004]. Pivoting to serve a different market.) In other words, we are encouraged to train our focus where it will provide the greatest value.

In my opinion, there is no single correct answer to the question "how much up-front design should we do?". Rather, we should ask "how can we learn the most about what our system should look like, as quickly as possible?". The answer to that question will be measurable in some quantity of design and implementation work, and that is where you should spend your focus. Software is operational knowledge. To produce novel software, we must gain knowledge. We must learn new things.