Slate on ‘Decentralized Intelligence’

I found this a captivating read, though it’s about risk management and problem solving, not decentralization of power per se. The vivid lessons excerpted in the full entry are a reminder that complete risk analysis is impossible, so the only sure strategy is containment.

When organizations fail, our first reaction is typically to fall into “control mode”: One person, or at most a small, coherent group of people, should decide what the current goals of the organization are, and everyone else should then efficiently and effectively execute those goals. Intuitively, control mode sounds like nothing so much as common sense. It fits perfectly with our deeply rooted notions of cause and effect (“I order, you deliver”), so it feels good philosophically. It also satisfies our desire to have someone held accountable for everything that happens, so it feels good morally as well.

But when a failure is one of imagination, creativity, or coordination—all major shortcomings of the various intelligence branches in recent years—introducing additional control, whether by tightening protocols or adding new layers of oversight, can serve only to make the problem worse…

In 1997, the Toyota group suffered what seemed like a catastrophic failure in its production system when a key factory—the sole source of a particular kind of valve essential to the braking systems of all Toyota vehicles—burned to the ground overnight…

How does one rapidly regenerate large quantities of a complex component, in several different varieties, without any of the specialized tools, gauges, or manufacturing lines (almost all of which were lost), with barely any relevant experience (the company that made them was highly specialized), with very little direction from the original company (which was quickly overwhelmed), and without compromising any of their other production tasks?…

…the response was a bewildering display of truly decentralized problem solving: More than 200 companies reorganized themselves and each other to develop at least six entirely different production processes, each using different tools, different engineering approaches, and different organizational arrangements. Virtually every aspect of the recovery effort had to be designed and executed on the fly, with engineers and managers sharing their successes and failures alike across departmental boundaries, and even between firms that in normal times would be direct competitors.

Within three days, production of the critical valves was in full swing, and within a week, output had returned to its pre-disaster levels. The kind of coordination this activity required had not been consciously designed, nor could it have been developed in so drastically short a time frame. The surprising fact was that it was already there, lying dormant in the network of informal relations that had been built up between the firms through years of cooperation and information sharing over routine problem-solving tasks. No one could have predicted precisely how this network would come in handy for this particular problem, but they didn’t need to—by giving individual workers fast access to information and resources as they discovered their need for them, the network did its job anyway.

Perhaps the most striking example of informal knowledge helping to solve what would appear to be a purely technical problem occurred in a particular company that lost all its personnel associated with maintaining its data storage systems. The data itself had been preserved in remote backup servers but could not be retrieved because not one person who knew the passwords had survived. The solution to this potentially devastating (and completely unforeseeable) combination of circumstances was astonishing, not because it required any technical wizardry or imposing leadership, but because it did not. To access the database, a group of the remaining employees gathered together, and in what must have been an unbearably wrenching session, recalled everything they knew about their colleagues: the names of their children; where they went on holidays; what foods they liked; even their personal idiosyncrasies. And they managed to guess the passwords. The knowledge of seemingly trivial factoids about a co-worker, gleaned from company picnics or around the water cooler, is not the sort of data one can feed into a risk-management algorithm, or even collate into a database—in fact, it is so banal that no one would have thought to record it, even if they could. Yet it turned out to be the most critical component in that firm’s stunning return to trading only three days after the towers fell.

So, how does one make this kind of magic happen? Unfortunately, no one is quite sure.