Summary
Doing the simplest thing that could possibly work is frequent advice from the agile development movement. But how applicable is that advice across different kinds of development contexts?

One of the key tenets of agile development is to do the simplest thing that could possibly work. By its very definition, this dictum is hard to disagree with: after all, who would want to deliver a solution more complex than it needs to be?

Test-driven development provides a practical path to following the simplest-thing-that-could-work principle: Write one unit test that exercises a simple function or method, and then implement that method in a most minimal manner that lets the test pass.
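As a rough sketch of that rhythm (the class and method names here are invented for illustration, not taken from any real project): the test pins down a single behavior first, and the implementation does no more than that test demands.

```java
// Hypothetical first TDD step: one behavior pinned down by a test,
// then the minimal code that satisfies it.
class PriceCalculator {

    // Minimal implementation: exactly what the test demands, no more.
    static double totalFor(int quantity, double unitPrice) {
        return quantity * unitPrice;
    }

    public static void main(String[] args) {
        // The "one unit test", written before the method body existed.
        if (totalFor(3, 2.50) != 7.50) {
            throw new AssertionError("expected 7.50");
        }
        System.out.println("test passed");
    }
}
```

Anything beyond the multiplication (discounts, currency rounding, tax) would wait for a test that demands it.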

While few would argue for complex solutions in place of simple ones, most business systems tend to be anything but simple, both in terms of code and features. Abstraction is a proven way to deal with the unavoidable complexity of many enterprise business requirements: Abstraction allows common functionality to be implemented in one place and reused throughout the system.

But abstraction can also work against simplicity: It may take more work to apply abstract concepts to concrete, immediate problems. If more code needs to be routinely written to bridge the gap between highly abstract concepts and simple use-cases, that extra code will, over time, add complexity to the system, and make the system more difficult to understand.

In a recent blog post, Jay Fields explores the intersection of the simplest-thing-that-could-work with the real requirements of enterprise systems:

Often the emphasis of the phrase is like so: the *simplest* thing that could possibly work. However, recently ... [I] was discussing the idea that the emphasis should be: the simplest thing that could *possibly* work. The difference is only emphasis, but it's a change worth considering...

I believe there are occasions where the simplest thing that could possibly work is writing 10 lines of code today that do what you need, and deleting them tomorrow in light of new requirements...

Also, painting yourself into a corner when you can jump over the paint is fine, but if someone else keeps painting on the other side, you may not be able to get out. The small amount of code that we wrote can be rewritten in about an hour, but if someone else was going to add to that code then we would have probably gone down the more complicated path. No one was building on our code so we weren't as worried about the foundation we put in place...

While the point Fields raises applies to almost any sort of software project, it is even more relevant to projects that aim to develop a product. In many consulting engagements, by contrast, the definition of "possibly"—in possibly work—can be set rather low: For instance, if the goal is to demonstrate the next working milestone of an application, it is possible to write some code that, while perfectly meeting the immediate requirement, will most likely have to be thrown away. In a product development environment, that sort of effort would be considered waste.

As someone responsible for hiring consultants and, in a different context, as a consultant, I have seen that doing the simplest thing that could possibly work can be counterproductive. Starting out with the simplest thing, and then refactoring to more abstract patterns may seem like a sensible thing to do, but domain expertise can often, and luckily, shortcut several iterations of refactorings: Familiarity with a domain means that an architect can design a bit with the future in mind, since he has, in a sense, lived that future already. I've also found that organizations are not always tolerant of a developer's desire to refactor code to better abstractions: Once a feature is in place and demonstrably works, managers may tend to move a developer onto new projects and new features. "Post-release entropy reduction," to use Luke Hohmann's term, while extremely desirable, is not always possible.

In your projects how do you decide between doing the simplest thing that could possibly work and designing with the future needs of the project or code base in mind?

He pointed out that when he was saying this, it was a *question*, not a command. It was a way to get unstuck by getting something into code, after which you could change it if you wanted. But the act of writing it down could help your understanding of the problem.

A simple dictum like "Do the simplest thing that could possibly work" is always just that: simple. Its scope is very narrow: it applies only to situations where all other guidelines and practices have already been followed. In any other situation, it will be at odds with those other guidelines, including other key tenets like 'Embrace (and anticipate) change'. In practical terms: the simplest solution involves too few abstractions to be prepared for change.

These 'one-liners' are guidelines to be remembered on the relevant occasion, for instance when one finds oneself tempted to overengineer. They should never be strictly adhered to.

I think the main problem with this catch-phrase is that there's no commonly agreed upon way to judge the relative complexity of solutions.

To give you an example that isn't completely on-topic, I've been told by COBOL developers that COBOL is a very simple language. As someone who has looked at the grammar of COBOL, I think it's extremely complex. Am I right? Are they? I'm not convinced there's an objective measure that would settle the dispute. 'Simple' means different things to different people.

Take the following trivial example:

boolean test;
if (someCondition) {
    test = false;
} else {
    test = true;
}

vs:

boolean test = !someCondition;

I would wager that most experienced Java developers would favor the second and say it's simpler. But a lot of novice developers would say the first is simpler because it's obvious what it does. It lays it all out. The second one requires understanding that expressions have values. The experienced developer sees a lot of unnecessary code in the first example: it's more complicated than it needs to be.

The reason this is a problem is that these kinds of sayings can be used as bludgeons in a disagreement over approach. If you accept the premise of the cliché (as many developers would), then someone can arbitrarily declare their preferred solution simpler, and therefore preferable, instead of discussing the relative merits of the choices.

In my experience, a good fraction of all programmers (often among the smartest ones) are horrendous over-coders. A recent example that comes to mind was a very smart programmer who created some database code for a small database management feature where all the database-specific parts were abstracted out (using Java generics no less).

This would have been great had we been writing a commercial product that needed to run against different database flavors, but this was an internal app that was unlikely to ever run against anything but MySQL (there were no long-range plans to switch, and if we ever did, far more code than this small package would need to change).

The problem wasn't that the code didn't work, but that we had a large team and this highly abstract, generified code took several hours to wrap your head around, making everyone else avoid touching it if they could, and slowing down their progress if they couldn't.

I think the "simplest thing that could possibly work" rule is aimed squarely at situations like this. To me, it says: try to keep things as simple and straightforward as possible so that when some poor schmuck has to work on your code, they're not muttering "what @$&-#*|% wrote this!"

I don't think the rule says "put blinders on and don't look at the big picture".

If you know a key feature of your app is importing data from various formats, and you know many different members of your team will be writing data importers, it's likely very worthwhile to put some effort in up front to create a well structured data import library or framework, so you don't end up with a dozen importer modules written a dozen different ways that don't share any common code.

But if all you need for the foreseeable future is to import .csv files, writing a big abstract data import framework is negative productivity -- you waste your time now, and more time in the future by making the code harder to understand and harder to change.
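For the .csv-only case, the simplest thing that could possibly work might be nothing more than the sketch below (an illustrative toy, not production code: real CSV parsing has to handle quoting and escaped delimiters).

```java
import java.util.ArrayList;
import java.util.List;

// A deliberately minimal CSV importer for the ".csv files only" case:
// no framework, no pluggable formats, no invented requirements.
class CsvImporter {

    static List<String[]> importRows(List<String> lines) {
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            // limit of -1 keeps trailing empty fields
            rows.add(line.split(",", -1));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String[]> rows = importRows(List.of("id,name", "1,Ada"));
        System.out.println(rows.get(1)[1]); // prints "Ada"
    }
}
```

If a second format ever becomes a real requirement, this is small enough to refactor behind an interface in an afternoon, which is exactly the bet the simplest-thing rule is making.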

"Simplest thing that could possibly work" is a good (and politically correct) catch phrase for part of a philosophy that has been known for a long time. The KISS acronym is, I think, a more expressive form of it. Most people put the emphasis on prevention, which is important, but in the end what really matters is the simplicity of the end product.

Simplicity is at odds with the abstraction and indirection that allow us to describe things using fewer words and make systems more flexible. It is the job of the software engineer to find the correct balance.

And now for the pop quiz you can try to guess who wrote the following:

"It seems that perfection is reached not when there is nothing left to add, but when there is nothing left to take away."

"Everything should be made as simple as possible, but no simpler."

"All problems in computer science can be solved by another level of indirection."

"...except for the problem of too many layers of indirection."

> The problem wasn't that the code didn't work, but that we had a large team and this highly abstract, generified code took several hours to wrap your head around, making everyone else avoid touching it if they could, and slowing down their progress if they couldn't.

Obviously, I haven't seen the code that you are referring to but from your description, it sounds like the real problem is that this abstraction wasn't well done or wasn't appropriate for the context.

If the abstraction was easy to understand, simplified your development and maintenance and saved you time over the long run, would you still say it was a bad choice? The non-sequitur that I repeatedly contend with is that because developer X tried to create an abstraction and that abstraction was ultimately a failure, no one else should attempt to solve the same problem with an abstraction. When a bridge collapses, you don't hear people saying bridges are a bad idea.

Having said that, I have also seen a lot of overly complex abstractions. The thing is that the abstraction isn't the problem; the overly complex part is the problem. If you look at the phrase "the simplest thing that could possibly work", there's nothing to suggest there's a problem with a non-complex abstraction. What I see often, however, is that 'abstraction' and 'complex' are treated as equivalent, when in fact really good abstractions simplify development.

I don't want to attribute this frame of mind to you, Don, or anyone else in particular. My point is that simple catch-phrases like this are not enough to convey the depth of the issue at hand. They can be used as reminders or a shorthand, but without the larger understanding they become useless or worse.

> Test-driven development provides a practical path to following the simplest-thing-that-could-work principle: Write one unit test that exercises a simple function or method, and then implement that method in a most minimal manner that lets the test pass.

I think this is the major myth of TDD. Passing or failing a unit test says nothing about the complexity of a method. It's really open-loop control as far as complexity is concerned.

Incrementally adding functionality with unknown complexity at each step is not a good algorithm for creating the simplest code. In fact, sometimes the simplest and most elegant code comes from understanding the big picture and implementing a holistic solution.

It's certainly possible to derive a good solution using TDD, I just don't think it has any unique ability to support it.

> I think this is the major myth of TDD. Passing or failing a unit test says nothing about the complexity of a method.

This seems so obvious to me that I don't really understand why it's even something that you or anyone else needs to point out. All you have to do is consider a prime number generator and realize that it's nonsense.
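The prime-number point can be made concrete. In the sketch below (illustrative code, not anyone's actual test suite), two implementations pass the same unit test, yet the test says nothing about which one is the general, or the simpler, solution.

```java
// Two implementations that pass the same unit test; the test alone
// cannot distinguish the general solution from the hardcoded one.
class Primes {

    // "Simplest thing that makes the test pass": a lookup.
    static boolean isPrimeHardcoded(int n) {
        return n == 2 || n == 3 || n == 5 || n == 7;
    }

    // The general solution: trial division.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The "unit test": both versions pass for these inputs...
        for (int n : new int[] {2, 3, 5, 7}) {
            if (!isPrimeHardcoded(n) || !isPrime(n)) {
                throw new AssertionError("failed for " + n);
            }
        }
        // ...but only one of them actually decides primality in general.
        System.out.println(isPrime(11) == isPrimeHardcoded(11)); // prints "false"
    }
}
```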

I don't know where this idea that test cases can define functionality came from but it's really annoying. I'm currently contending with a dogma that test cases are adequate functional specs.

> > I think this is the major myth of TDD. Passing or failing a unit test says nothing about the complexity of a method.
>
> This seems so obvious to me that I don't really understand why it's even something that you or anyone else needs to point out. All you have to do is consider a prime number generator and realize that it's nonsense.
>
> I don't know where this idea that test cases can define functionality came from but it's really annoying. I'm currently contending with a dogma that test cases are adequate functional specs.

I think you answered your own question. I point out this myth because many people believe in it, and I don't want to be subject to the dogma that you're grappling with.

> I think you answered your own question. I point out this myth because many people believe in it, and I don't want to be subject to the dogma that you're grappling with.

Sorry, I didn't make myself clear. I wasn't wondering why you point it out; it's clearly necessary to do so. But why is it necessary? In general, why do we have a daily dogma to contend with? It's well known that many software projects fail. I'm starting to think that the main problem is that evidence and theory are given the same weight. I guess I'm just ranting about a culture where all arguments are considered equally valid no matter how flimsy they are.

> My point is that simple catch-phrases like this are not enough to convey the depth of the issue at hand. They can be used as reminders or a short-hand but without the larger understanding they become useless or worse.

I agree.

The problem I described wasn't that the abstraction was poorly done (it was quite good in fact). Using that module was easy enough. The problem came when others needed to dive under the hood to tweak or extend it.

The programmer had invented a requirement for this code ("should be easy to add support for other databases") and had separated the functionality into 15 classes where five were needed. Had this been a real requirement, the rest of the team would have been happy -- you needed to implement just one interface to add support for another database.

But this wasn't a real requirement. The "simplest thing that could possibly work" became a starting point for a conversation among team members about the requirements and complexity of this bit of code.
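For illustration, the kind of extension seam described above might look roughly like the sketch below (all names are invented; this is not the actual code from that project). The point of contention is whether this seam, and the classes behind it, should exist at all when MySQL is the only target.

```java
// Hypothetical sketch of a "pluggable database" seam: one interface
// per database-specific concern.
interface DatabaseDialect {
    // Example of syntax that genuinely differs across databases.
    String limitClause(int maxRows);
}

class MySqlDialect implements DatabaseDialect {
    public String limitClause(int maxRows) {
        return "LIMIT " + maxRows;
    }
}

class QueryBuilder {
    private final DatabaseDialect dialect;

    QueryBuilder(DatabaseDialect dialect) {
        this.dialect = dialect;
    }

    String selectAll(String table, int maxRows) {
        return "SELECT * FROM " + table + " " + dialect.limitClause(maxRows);
    }

    public static void main(String[] args) {
        QueryBuilder qb = new QueryBuilder(new MySqlDialect());
        System.out.println(qb.selectAll("users", 10)); // SELECT * FROM users LIMIT 10
    }
}
```

With only one implementing class, every call routes through an indirection that buys nothing today; with a real second database, the same structure becomes the cheapest way to add it.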

> I think this is the major myth of TDD. Passing or failing a unit test says nothing about the complexity of a method. It's really open loop control as far as complexity is concerned.
>
> Incrementally adding functionality with unknown complexity at each step is not a good algorithm for creating the simplest code. In fact, sometimes the simplest and most elegant code comes from understanding the big picture and implementing a holistic solution.

I think an important step in TDD that is often omitted or overlooked is refactoring. To my understanding, TDD is done in the following steps:

1. Write a test
2. Write the simplest code that makes all the tests pass
3. Refactor
4. Go to 1 until you can't think of any more test cases for your requirements

Part of refactoring is looking at the bigger picture and seeing where your code fits in.
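One turn of that loop can be sketched using the article's own boolean example (the class and method names are invented for illustration): the verbose version makes the test pass, and the refactor step compresses it without changing behavior.

```java
// One turn of the TDD loop applied to the article's boolean example.
class FeatureFlag {

    // Step 2: simplest code that makes the test pass (verbose version).
    static boolean isDisabledVerbose(boolean someCondition) {
        boolean test;
        if (someCondition) {
            test = false;
        } else {
            test = true;
        }
        return test;
    }

    // Step 3: refactored version; same behavior, less code.
    static boolean isDisabled(boolean someCondition) {
        return !someCondition;
    }

    public static void main(String[] args) {
        // Step 1's test still passes after the refactor, which is the point:
        // refactoring changes structure, not behavior.
        for (boolean c : new boolean[] {true, false}) {
            if (isDisabledVerbose(c) != isDisabled(c)) {
                throw new AssertionError("refactor changed behavior");
            }
        }
        System.out.println("behavior preserved");
    }
}
```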

> It's certainly possible to derive a good solution using TDD, I just don't think it has any unique ability to support it.

Certainly TDD isn't the only way to go. But it's a good technique to focus your mind on the interface to your code and on your requirements.

> But this wasn't a real requirement. The "simplest thing that could possibly work" became a starting point for a conversation among team members about the requirements and complexity of this bit of code.

I get where you're coming from, but I think 'don't implement requirements that don't exist' is not the same thing as saying 'the simplest thing that could possibly work'. I can believe that a lot of people equate the two, but they are really quite different statements. You could probably get to the first from the second, but there are a lot of other conclusions that could be drawn. I also love the 'possibly work' part. I personally prefer things that will 'definitely work', but that's just me, I guess.

I've actually written an API for working with databases that abstracts out a lot of error-prone boilerplate. It's a lot easier to work with, and I've already saved myself and others many hours of development by doing this. If someone needed to make changes to this code, I can imagine they might be overwhelmed. They might curse me for creating something they feel they don't understand well enough to change.

To be quite frank, I really don't care. I've been developing too long to cut myself off at the knees to make sure some hypothetical idiot developer understands what I've done. It makes no sense to me to limit myself to practices that I know for a fact create more problems, like repeating myself or coupling large swaths of code to 3rd-party libraries. These are things I'm told I should do to 'keep things simple'. The way I see it, 'simple' is a synonym for 'stupid', and I'm not interested in being stupid no matter how many catch phrases someone throws at me.

Again, I haven't seen the code you are talking about. From your description, it doesn't solve any real problems, so that would make it a cost with no benefits and therefore unwise. And trust me, I've seen overcomplicated designs. I've dug through some really crazy shit wondering why the 'experts' who wrote it didn't spend some time making the code actually work instead of creating functionality that was never used.

I guess this whole topic kind of annoys me because I hate being limited by the failures of people I don't even know. Where would human technology be if that's how the world worked? We'd still be in trees and caves. When it comes down to it, it's about experience. No number of clichés will make up for a lack of experience.