When Murray Gell-Mann was asked how Richard Feynman managed to solve so many hard problems, Gell-Mann responded that Feynman had an algorithm:

Write down the problem.

Think real hard.

Write down the solution.

Gell-Mann was trying to explain that Feynman was a different kind of problem solver, and that there were no insights to be gained from studying his methods. I kinda feel the same way about managing complexity in medium-to-large software projects. The people who are good at it are just inherently good at it, and somehow manage to layer and stack various abstractions to make the whole thing manageable without introducing any extraneous cruft.

So is the Feynman algorithm the only way to manage accidental complexity or are there actual methods that software engineers can consistently apply to tame accidental complexity?

I wouldn't be surprised if the act of writing down the problem (and fleshing it out so you could adequately explain it to someone else) helped you identify a solution.
– Rory Hunter, Feb 17 '14 at 12:02

@RoryHunter - Agreed. And part of writing down the problem and sharing with someone indicates you admit you don't have a solution yet.
– JeffO, Feb 17 '14 at 13:28


@RoryHunter: This. Almost weekly, I come across a problem that I can't solve, write an email to someone to explain it. Then realise what it is that I'm not considering, solve the problem and delete the email. I've also written about a dozen questions on this site that never got sent.
– pdr, Feb 17 '14 at 13:43

The most fundamental thing I have learned is: do not ask a developer to handle a problem that is just within their reach. While those problems are exciting, they also involve the developer stretching into unfamiliar places, leading to a lot of "on the fly" growth. Some tools, such as spiral development, are good for grounding development teams in a small, tractable problem before growing it into a final solution.
– Cort Ammon, Feb 18 '14 at 1:31


@CortAmmon Not to be mean, but that sounds like a pretty dumb insight. 99% of what developers know was learned at some point through your so-called 'on-the-fly growth'. It takes a good problem solver to make a good programmer. Solving problems is something we're inherently drawn to. If your developers aren't growing, they're probably doing a lot of boring, repetitive work — the type of work that will make any reasonably talented developer unhappy and depressed. And 'spiral development' is nothing but a rehashing of the basic concept of iterative development with waterfall milestones.
– Evan Plaice, Feb 18 '14 at 19:20

6 Answers

When you see a good move, look for a better one.
—Emanuel Lasker, 27-year world chess champion

In my experience, the biggest driver of accidental complexity is programmers sticking with the first draft, just because it happens to work. This is something we can learn from our English composition classes. They build in time to go through several drafts in their assignments, incorporating teacher feedback. Programming classes, for some reason, don't.

There are books full of concrete and objective ways to recognize, articulate, and fix suboptimal code: Clean Code, Working Effectively with Legacy Code, and many others. Many programmers are familiar with these techniques, but don't always take the time to apply them. They are perfectly capable of reducing accidental complexity, they just haven't made it a habit to try.

Part of the problem is that we don't often see the intermediate complexity of other people's code unless it has gone through peer review at an early stage. Clean code looks like it was easy to write, when in fact it usually involves several drafts. You write it the best way that comes into your head at first, notice the unnecessary complexities it introduces, then "look for a better move" and refactor to remove those complexities. Then you keep "looking for a better move" until you are unable to find one.
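As a hypothetical illustration of that drafting process (all function and field names are invented), a first working version often carries accidental complexity that a second pass removes:

```python
# Hypothetical first draft: works, but nests conditions and mixes
# iteration, filtering, and collection in one loop.
def active_admin_emails(users):
    result = []
    for user in users:
        if user["active"]:
            if user["role"] == "admin":
                result.append(user["email"])
    return result

# After "looking for a better move": same behavior, expressed as one
# named predicate and one comprehension.
def is_active_admin(user):
    return user["active"] and user["role"] == "admin"

def active_admin_emails_v2(users):
    return [user["email"] for user in users if is_active_admin(user)]
```

Neither draft is wrong; the point is that the second only appears once you have the first in front of you and go looking for the simpler form.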

However, you don't put the code out for review until after all that churn, so externally it looks as if it may as well have been a Feynman-like process. You have a tendency to think you can't do it all in one chunk like that, so you don't bother trying, but the truth is that the author of that beautifully simple code you just read usually can't write it all in one chunk like that either; or if they can, it's only because they have written similar code many times before and can now see the pattern without the intermediate stages. Either way, you can't avoid the drafts.

"Software architecture skill cannot be taught" is a widespread fallacy.

It is easy to understand why many people believe it (those who are good at it want to believe they're mystically special, and those who aren't want to believe that it's not their fault that they aren't). It is nevertheless wrong; the skill is just somewhat more practice-intensive than other software skills (e.g. understanding loops, dealing with pointers, etc.).

I firmly believe that constructing large systems is susceptible to repeated practice and learning from experience in the same way that becoming a great musician or public speaker is: a minimum amount of talent is a precondition, but it's not a depressingly huge minimum that is out of reach of most practitioners.

Dealing with complexity is a skill you acquire largely by trying and failing a few times. It's just that the many general guidelines that the community has discovered for programming in the large (use layers, fight duplication wherever it rears its head, adhere religiously to 0/1/infinity...) are not as obviously correct and necessary to a beginner until they actually do program something that is large. Until you have actually been bitten by duplication that caused problems only months later, you simply cannot 'get' the importance of such principles.
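To make the 0/1/infinity guideline concrete, here is a small, hypothetical sketch (the class names are invented): hard-coding "exactly two" of something usually breaks the moment a third appears, while a collection-based design handles zero, one, or many with the same code path.

```python
# Violates zero-one-infinity: exactly two recipients are baked into
# the interface, so a third recipient forces a signature change.
class AlertV1:
    def __init__(self, primary_email, secondary_email):
        self.primary_email = primary_email
        self.secondary_email = secondary_email

# Follows the rule: zero, one, or arbitrarily many recipients share
# one code path, and "one more recipient" never changes the design.
class AlertV2:
    def __init__(self, recipients):
        self.recipients = list(recipients)

    def deliver(self, send):
        # 'send' is any callable taking an address, e.g. print.
        for address in self.recipients:
            send(address)
```

Until you have watched a design like `AlertV1` ripple through a codebase when requirements grow, the rule can look like pedantry; afterwards it looks obvious.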

I really like your last sentence. I'm positive I learned so much at my first job because I was there long enough for my mistakes to catch up with me. It's a valuable experience.
– MetaFight, Feb 17 '14 at 12:17

I don't understand how you can assert that the inability to teach software architecture is a fallacy, but go on to say that repeated practice, learning from experience, and making mistakes (coupled with some innate talent) is the only way to learn it. My definition of something that can be taught is something that you don't need to practice intensively to get, but that you can learn by watching, listening, and reading. I agree with everything in this answer, except the first line, since it is apparently contradicting the rest.
– Thomas Owens♦, Feb 17 '14 at 13:46


The point is that a lot of people claim that many people absolutely, under any circumstances cannot learn to be good architects (whether in a lecture room or in the industry), because they "haven't got what it takes". That's what I consider common but false.
– Kilian Foth, Feb 17 '14 at 13:48


@Thomas: Getting back to the public-speaking analogy: no matter how many books you read, how many hours you spend studying the topic, or how many teachers try to teach you to be good at public speaking, you can't get good at it unless you do it, through repeated practice, learning from experience, and making mistakes. You'll never convince me that someone can learn the skill simply by watching, listening, and reading. The same goes for most any skill, including software architecture. I actually can't think of any skill of any substance that you can learn simply by watching, listening, and reading.
– Dunk, Feb 17 '14 at 16:00

Pragmatic Thinking and Learning by Andy Hunt addresses this issue. It refers to the Dreyfus model, according to which there are five stages of proficiency in a skill. Novices (stage 1) need precise instructions to be able to do something correctly. Experts (stage 5), on the contrary, can apply general patterns to a given problem. Citing the book:

It’s often difficult for experts to explain their actions to a fine level of detail; many of their responses are so well practiced that they become preconscious actions. Their vast experience is mined by nonverbal, preconscious areas of the brain, which makes it hard for us to observe and hard for them to articulate.

When experts do their thing, it appears almost magical to the rest of us—strange incantations, insight that seems to appear out of nowhere, and a seemingly uncanny ability to know the right answer when the rest of us aren’t even all that sure about the question.
It’s not magic, of course, but the way that experts perceive the world, how they problem solve, the mental models they use, and so on, are all markedly different from nonexperts.

This general rule of seeing (and as a result avoiding) different issues can be applied specifically to the issue of accidental complexity. Having a given set of rules isn't enough to avoid this problem; there will always be a situation that isn't covered by those rules. We need to gain experience to be able to foresee problems and identify solutions. Experience is something that cannot be taught; it can only be gained by constantly trying, failing or succeeding, and learning from mistakes.

This question from Workplace is relevant and IMHO would be interesting to read in this context.

That is like saying that it isn't magic how the "elite" athletes are able to do what they do, it is just a matter of them being able to naturally perform the discrete and logical steps necessary to perform at the highest levels. So if only those athletes could articulate those discrete and logical steps to us, then we could all be elite athletes. The concept that we could all be elite athletes, no matter what level of knowledge we obtain, is just as ridiculous as we could all be experts at whatever skill we are trying to learn, regardless of aptitude in that skill area.
– Dunk, Feb 17 '14 at 16:14


@Dunk, magic would be if those "elite" athletes could perform the same without any training at all. The main idea is that there is no silver bullet. No matter how talented one is, experience cannot be gained just by studying some "discrete and logical steps". By the way, according to the same book, only 1–5% of people are experts.
– superM, Feb 17 '14 at 17:01

@Super: I would question any book/study that made such a ridiculous claim as "only 1–5% of people are experts". Talk about pulling a number out of their #&#&$#. Experts at what? I bet there is a much higher percentage of people who are experts at breathing, walking, and eating. Who decides what expert level is? A claim like the 1–5% figure discredits any further claims and analysis by such authors.
– Dunk, Feb 19 '14 at 15:10

You don’t spell it out, but “accidental complexity” is defined as complexity that is not inherent to the problem, as opposed to “essential” complexity. The techniques required for “taming” it will depend on where you start from. The following refers mostly to systems that have already acquired unnecessary complexity.

I have experience in a number of large multi-year projects where the “accidental” component significantly outweighed the “essential” aspect, and also in those where it did not.

Actually, the Feynman algorithm applies to some extent, but that does not mean that “think real hard” means only magic that cannot be codified.

I find there are two approaches that need to be taken. Take them both – they are not alternatives. One is to address it piecemeal and the other is to do a major rework.
So certainly, “write down the problem”. This might take the form of an audit of the system – the code modules, their state (smell, level of automated testing, how many staff claim to understand it), the overall architecture (there is one, even if it “has issues”), state of requirements, etc. etc.

It’s the nature of “accidental” complexity that there is no one problem that just needs to be addressed, so you need to triage. Where does it hurt, in terms of the ability to maintain the system and progress its development? Maybe some code is really smelly but is not top priority, and fixing it can wait. On the other hand, there may be some code that will rapidly repay time spent refactoring it.

Define a plan for what a better architecture will be and try to make sure new work conforms to that plan – this is the incremental approach.

Also, articulate the cost of the problems and use that to build a business case to justify a refactor. The key thing here is that a well architected system may be much more robust and testable resulting in a much shorter time (cost and schedule) to implement change – this has real value.

A major rework does come in the “think real hard” category – you need to get it right. This is where having a "Feynman" (well, a small fraction of one would be fine) does pay off hugely. A major rework that does not result in a better architecture can be a disaster. Full system rewrites are notorious for this.

Implicit in any approach is knowing how to distinguish “accidental” from “essential” – which is to say you need to have a great architect (or team of architects) who really understands the system and its purpose.

Having said all that, the key thing for me is automated testing. If you have enough of it, your system is under control. If you don't . . .

Could you explain how automated testing serves to differentiate accidental and essential complexity?
– ryascl, Feb 19 '14 at 1:55


@RyanSmith. In short, No. In fact, I don't think there is any particular way (other than "think hard") to distinguish these. But the question is about "managing" it. If you view requirements, design, test cases as part of the system architecture, then lack of automated tests is in itself accidental complexity, so adding automated testing where it is lacking does help address it and make what there is more manageable. But most definitely it does not resolve all of it.
– Keith, Feb 19 '14 at 2:03

Accidental Complexity

The original question (paraphrased) was:

How do architects manage accidental complexity in software projects?

Accidental complexity arises when those with direction over a project choose to append one-off technologies that the original architects' overall strategy never intended to bring into the project. For this reason it is important to record the reasoning behind the choice of strategy.

Accidental complexity can be staved off by leadership that sticks to its original strategy until a deliberate departure from that strategy becomes clearly necessary.

Avoiding Unnecessary Complexity

Based on the body of the question, I would rephrase it like this:

How do architects manage complexity in software projects?

This rephrasing is more apropos to the body of the question, which then brings in the Feynman algorithm, proposing that the best architects, when faced with a problem, have a gestalt from which they skilfully construct a solution, and that the rest of us cannot hope to learn this. Having such a gestalt of understanding depends on the intelligence of the subject and on their willingness to learn the features of the architectural options that could be within their scope.

The process of planning for the project would use the learning of the organization to make a list of the requirements of the project, and then attempt to construct a list of all possible options, and then reconcile the options with the requirements. The expert's gestalt allows him to do this quickly, and perhaps with little evident work, making it appear to come easily to him.

I submit to you that it comes to him because of his preparation. Having the expert's gestalt requires familiarity with all of your options, the foresight to provide a straightforward solution that allows for the project's foreseen future needs, and the flexibility to adapt to its changing needs. Feynman's preparation was a deep understanding of various approaches in both theoretical and applied mathematics and physics. He was innately curious, and bright enough to make sense of the things he discovered about the natural world around him.

The expert technology architect will have a similar curiosity, drawing on a deep understanding of fundamentals as well as a broad exposure to a great diversity of technologies. He (or she) will have the wisdom to draw upon the strategies that have been successful across domains (such as Principles of Unix Programming) and those that apply to specific domains (such as design patterns and style guides). He may not be intimately knowledgeable of every resource, but he will know where to find the resource.

Building the Solution

This level of knowledge, understanding, and wisdom can be drawn from experience and education, but it requires intelligence and mental activity to put together a gestalt strategic solution that works as a whole in a way that avoids accidental and unnecessary complexity. It requires the expert to put these fundamentals together; these are the knowledge workers Drucker foresaw when he first coined the term.

Back to the specific final questions:

Specific methods to tame accidental complexity can be found in the following sorts of sources.

Following the Principles of Unix Programming will have you creating simple modular programs that work well and are robust with common interfaces. Following Design Patterns will help you construct complex algorithms that are no more complex than necessary. Following Style Guides will ensure your code is readable, maintainable, and optimal for the language in which your code is written. Experts will have internalized many of the principles found in these resources, and will be able to put them together in a cohesive seamless fashion.
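As a small, hypothetical illustration of that Unix style of simple modular programs with common interfaces (the script name is invented), a filter that reads lines from stdin and writes lines to stdout composes with any other such tool through a pipe:

```python
import sys

def unique_lines(stream):
    """Yield each line the first time it appears, preserving order."""
    seen = set()
    for line in stream:
        if line not in seen:
            seen.add(line)
            yield line

if __name__ == "__main__":
    # Reads stdin, writes stdout, does one thing; composable with
    # other filters, e.g.:  cat access.log | python uniq_lines.py | wc -l
    sys.stdout.writelines(unique_lines(sys.stdin))
```

Because the logic is a plain generator over any iterable of lines, it is also trivially unit-testable without touching stdin or stdout, which is the kind of robustness those principles aim for.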

1. Write an integration test that fails because the feature is not there. Review it with QA, or with the lead engineer, if there is such a thing on your team.

2. Write unit tests for some classes that could pass the integration test.

3. Write the minimal implementation for those classes that passes the unit tests.

4. Review the unit tests and implementation with a fellow developer. Go to step 3.

The whole design magic is in step 3: how do you set up those classes? This turns out to be the same question as: how do you imagine you have a solution to your problem before you have a solution to your problem?
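A minimal, hypothetical pass through steps 1 and 3 of that loop might look like this (all class names, method names, and the discount rule are invented for illustration):

```python
import unittest

# Step 1: tests written before the feature exists; they fail until
# Order below is implemented.
class TestBulkDiscount(unittest.TestCase):
    def test_bulk_orders_get_ten_percent_off(self):
        order = Order(quantity=100, unit_price=2.0)
        self.assertEqual(order.total(), 180.0)

    def test_small_orders_pay_full_price(self):
        order = Order(quantity=10, unit_price=2.0)
        self.assertEqual(order.total(), 20.0)

# Step 3: the minimal implementation that makes those tests pass --
# no configuration hooks, no speculative generality.
class Order:
    BULK_THRESHOLD = 50
    BULK_DISCOUNT = 0.10

    def __init__(self, quantity, unit_price):
        self.quantity = quantity
        self.unit_price = unit_price

    def total(self):
        gross = self.quantity * self.unit_price
        if self.quantity >= self.BULK_THRESHOLD:
            return gross * (1 - self.BULK_DISCOUNT)
        return gross
```

The discipline is that `Order` contains nothing the tests did not demand; anything extra would be complexity the tests cannot justify.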

On the other hand, not everyone has the same "taste for imagined solutions": there are solutions that only you find elegant, and there are others understandable by a wider audience. That is why you need to peer-review your code with fellow developers: not so much to tune performance as to agree on understood solutions. Usually this leads to a redesign and, after some iterations, to much better code.

If you stick with writing minimal implementations to pass your tests, and write tests that are understood by many people, you should end up with a code base where only irreducible complexity remains.