Throw me an acorn, I’ll grow you a simulation

“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away”

– Antoine de Saint-Exupéry

We all have a tendency to overcomplicate things – it’s the endearing human trait of flogging dead horses – but the trick is to know when to cut poor solutions loose before it’s too late. Take the pilot who was once asked what would make a completely safe plane, to which he replied, “Oh, that’s easy. That would be if I could ask ‘which wing?’ after being informed engine 43 had failed.” Of course, he knows as well as we do that the Spruce Goose theory of plane construction is, if you’ll excuse the pun, simply not going to fly. If we brush aside the pesky laws of physics we uncover the irony underneath: above a surprisingly small number, engine count is inversely proportional to safety. Duplicating the single most complex component of the plane dozens of times over, along with the resultant wiring, structure and plumbing, is quite simply mind-boggling.

As for the cockpit, well, it’s best not to start. Unless it were staffed by Mr Tickle and his twin brother, it would be impossible to operate the eye-watering spread of buttons, dials, lights and levers. The real answer to the question “what would make a completely safe plane?”, of course, is not to have a plane at all. The best one can do is to keep it simple and, where simplicity isn’t possible, to have as much safety margin as you can muster. And as for engine count, on a modern airliner the correct value is between two and four, preferably two.

The golden hat of blame

Eighty-six-engine planes all sound very silly but, amazingly, much modern software is actually built this way: more and more engines are added and the wires, pipes and surrounding structures enhanced accordingly. All this happens seemingly without anyone sitting by a lake for a few hours and realising that perhaps fewer, more powerful engines might be an interesting idea or maybe, just maybe, more planes. It is this process of organic over-complication that has given us such gems as the “Ribbon Interface” in Microsoft Office products – a method of managing the complexity underneath rather than a wholesale attack on that complexity itself.

Of course, the golden hat of blame can be shared amongst almost every single component part of the process, but ultimately, an awfully large amount of hat goes to “feature-based development” rather than “problem-oriented development”. This generally manifests itself as a focus on adding new functionality in isolation from what already exists: building the future from the now without considering the past. It is this approach of adding layer after layer that explains why there are so many ways of achieving the same thing in many large applications: new methods arrive, old ones stay, nobody thinks about merging them together to simplify the process for everyone.

Project Jenga and strategic inertia

Proper software development is thinking, consideration and design, and it is this snap, crackle and pop that is first up against the wall when there’s a tight schedule to be stuck to. An appropriate cliché here is “the last 10% of the project takes 90% of the time”. I’d humbly suggest, though, that the reason it takes 90% of the time is that it is actually 90% of the project. I call it “Project Jenga” – taking bricks from underneath yourself to expand the tower in the short term. One pays the price eventually, regardless of how cosy it feels at the time.

Faced with a wobbling tower, modern software development processes have the unfortunate habit of erecting scaffolding. There are many examples of such scaffolding, one of which is a dramatic misuse of agile development practices in which the big picture is lost to history forever in favour of pushing feature-based development down to an even lower level of bullet points. Jobs get done, things get ticked, burn-down occurs and everyone gets a warm impression that incredible progress is being made, when often it’s a case of celebrating each step of the journey to the North Pole without realising you departed heading south. Things are getting done, but mostly “just well enough to tick the box” rather than properly and with careful consideration. Oh, and as for the destination, meh, you’ll arrive somewhere, and square wheels are better than no wheels at all, right?

History is littered with cases of setting up sentry guns to defend against cats when simply closing the door would have done. Take the truck stuck under a bridge: a broad range of over-complicated solutions was worked through before a young girl pointed out that they could just let the air out of the tyres. Software development is so jam-packed with examples it’s hard to know where to start, but I’ll pick this one: trying to reduce bugs by offering a bounty to QA testers for finding them and to programmers for fixing them, an approach that led to an underground black market in bug trading, where the team co-operated to maximise the payout.

As humans, we’re really bad at changing direction once we’ve set off: we tend to search for solutions within the problem space in which we’re sat rather than having a good look at that space itself to decide whether we need to be there at all. It’s “comfort-zone inertia”: the process in which we’ve invested so much must be patched, rather than replaced with a new process that might actually better suit the issues we’re dealing with. It’s that “well, we’ve come this far…” mentality. There’s also an odd, innovation-suppressing comfort in doing things roughly the way that everyone else does them because, well, if it’s good enough for them, then surely it’s good enough for us.

Top-down complexity is bad

Needless to say, to cut what has become an increasingly long story mercifully short, over-complexity kills software. The more code you have, the less secure, stable and flexible it becomes. You can’t brush this complexity under the carpet forever, because sooner or later you reach the ceiling. Then massive, costly refactoring is required (or, more terrifying still, the addition of yet more scaffolding) that could have been avoided if the job had simply been done in a different way from the outset. Complexity is why modern applications are so brittle, and it is why we generally have a “one size fits all” mentality for business software: applications are so fragile that it’s far, far too risky to poke around for too long. It’s like tip-toeing across a minefield in concrete boots – you can step as carefully as you wish, but frankly, sooner or later it’s going to get all explodey.

The modern way out is to blame the user or provide a 10,000 word excuse document, or, “License Agreement”, as we more commonly refer to it. This basically says that should the software escape the computer, re-arrange the contents of your kitchen cupboards, bag up your kittens and throw them off a bridge, then somehow it’s your own fault, not the developer’s.

We take this brittleness and instability for granted, and we simply should not have to.

Why?

Because there’s another way.

But first, let’s apply the obvious process improvements that everyone can easily do but often don’t because of process inertia or the flawed belief that it won’t deliver their requirements fast enough, when in fact, not only will it deliver them faster, it will enable delivery of future requirements with increasing ease:

• Write off failed solutions, don’t try and fix them

• Think before doing (look before you leap)

• Treat maintenance of software with the same priority as creating it

But, as we all collectively know, these things are easier to say than they are to do.

Zen and the art of Mother Nature

The most reliable software is the software that you don’t write at all. As zen as this wishy-washy statement sounds, few could argue that getting more out of less is always going to be a winner. And if there is one truly awesome example of complexity management we can observe, learn from and copy, it’s nature itself.

Nature is amazing. No one cell in any complex living system is in charge, yet somehow complex behaviour emerges from the interactions of many hundreds of millions of simple systems. The key word here is emergence: the really interesting stuff, the grey areas, happened not because it was specified but because it was enabled by an underlying model. This is the kind of stuff that we love: it’s features for free! We construct a life-support system for large populations of small things and allow them to combine to solve the problems, rather than solving the problems directly. We end up writing less software because we’re not concerned with detail (that works itself out); we’re merely concerned with allowing the solutions to occur.

It’s still complex, but that complexity appeared without being specified. We’ve thrown away the long list of rules as to how the software works and allowed those rules to grow, mature and adapt all by themselves out of a large population of little things that we can easily understand, maintain and manage. Neat, eh?
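The classic illustration of rules emerging from a population of simple things – my own choice of example, not one drawn from the simulations discussed here – is Conway’s Game of Life. Two local rules, no cell in charge, and yet coherent “creatures” such as the glider appear and travel across the grid without movement ever being specified. A minimal sketch in Python:

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) cell coordinates."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire "specification": a live cell survives with 2 or 3
    # neighbours; a dead cell is born with exactly 3. Everything
    # else (gliders, oscillators, guns) is emergent.
    return {
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A glider: five cells that, purely through the local rules above,
# reproduce themselves one square down and to the right every four
# generations. Nothing in the rules mentions movement.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

world = glider
for _ in range(4):
    world = step(world)

# The pattern has translated itself by (1, 1).
assert world == {(x + 1, y + 1) for (x, y) in glider}
```

The point is the ratio: the “specification” is two lines of rule, yet the behaviour that falls out of it fills encyclopaedias. That is the features-for-free bargain in miniature.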

This biologically inspired, bottom-up development philosophy is – without a doubt, in my opinion – the future of software development. As simulations become more real, as detail levels rise and as the desire for virtual characters more believable than cardboard cut-outs becomes a requirement, it will be increasingly obvious that a top-down, rule-based, scripted solution neither works nor scales. Giving up direct control in favour of seeding environments will become the only way of solving such problems.

The really wonderful thing about such approaches is that less doesn’t mean less; less means more, and in many ways: simulations become more realistic, more flexible and more adaptable, while less software brings greater security and stability. In short, this means round pegs in round holes: no longer do you need to put up with poorly fitting trousers; now you can have tailor-made, perfectly fitting ones. You can walk down the street proud and happy, knowing that you’re immune to wardrobe malfunctions.

This is clearly a long journey: the fundamental way in which software is developed cannot be changed in one huge leap, but nor can you cross a chasm in small steps. You need to build a bridge first. Or have very, very long clown shoes.

But the dream of having software adapt itself to the needs of those using it and being able to customise solutions to perfectly fit any given requirement is simply too much to resist.