collaboration.media is an open platform dedicated to exploring the themes of collaboration, creativity and innovation. In-depth articles, commentary, events and podcasts on the state of the economy, politics, culture and business in the collaborative society.

A world unexpected: I

Pythia

During the interwar years, the French government decided to pursue major defensive fortifications in order to prevent any future German invasion. The fortifications were enormously costly: the 1930 budget allocated 3.3 billion francs to the project (equivalent to roughly 1.3 billion USD at 2014 rates).

The French thought they understood the laws of military strategy and acted accordingly by building a ‘Great Wall of France.’ The wall proved completely useless. Today the Maginot Line serves a purpose at last, as a reminder of our prediction failures and our ‘epistemic arrogance,’ that is, our frequent overconfidence in what we are capable of knowing.

Opinions about the future permeate our modern societies. What would the consequences of a war between the U.S. and China be? What future awaits Microsoft in the coming years? Will Russia monopolize subarctic oil? It is quite tempting to answer all of these questions. But few, if any, of us can really know. Our minds did not evolve to properly evaluate probabilities, let alone to predict what lies ahead of us.

We usually hit several serious limitations when it comes to anticipating the consequences of potential future events.

Most of us tend to think that risk is something that can be estimated or calculated when, in fact, many kinds of risks are simply unknowable. Who could have predicted the assassination of the Archduke Franz Ferdinand? Barely anyone besides the plotters. To everyone else, the murder was a complete surprise. It is difficult to speak of such ‘unknowable’ risks because they can only be explained in hindsight.

Then come our psychological biases. Take, for example, our perception of time. We humans were never really made to think in the long term. The further an event lies in the future, the more we tend to minimize its importance. We effectively neglect whatever we might gain or lose in the distant future (social scientists refer to this phenomenon as ‘hyperbolic discounting’).
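Hyperbolic discounting is commonly modeled by dividing a future reward by (1 + k × delay). The sketch below illustrates the idea; the discount rate k and the dollar amounts are illustrative assumptions, not figures from the text:

```python
# Hyperbolic discounting: a reward's present value falls off as 1 / (1 + k * delay).
# The rate k = 0.1 per day is an arbitrary illustrative choice.

def hyperbolic_value(amount, delay_days, k=0.1):
    """Subjective present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# $100 tomorrow vs. $150 in thirty days:
soon = hyperbolic_value(100, 1)    # 100 / 1.1  ~ 90.9
later = hyperbolic_value(150, 30)  # 150 / 4.0  = 37.5
# The smaller, sooner reward feels more valuable: the distant gain is
# heavily neglected, exactly the bias the paragraph describes.
```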

Perhaps the greatest mental weakness we have is to mistake regularity for law. Nassim Nicholas Taleb convincingly argues that we have a dangerous tendency to induce law-like generalizations from limited experience. What we do not realize is that there exists absolutely no guarantee that even a thousand-year long trend will last forever. Social ‘realities’ are often far more contingent and fleeting than we think.

Some of these weaknesses can be overcome through slow, careful and deliberate thinking; but our ability to do so is limited. We can aspire to act as rationally as possible, but to suppose that we can be completely rational on a permanent basis is naïve.

Taleb’s work seems to favor the idea that we should rather focus on internalizing fast but useful heuristics (‘rules of thumb’) for our daily operations, while applying our rationality in a slow and intentional manner where it matters most. That includes, for instance, creating ‘rational’ institutions capable of counteracting our own ‘irrational’ tendencies.

Chaos

The limits to our understanding of risk stem partly from our own cognitive limitations, as argued above, but also from the very structure of the world we live in: even with the will and the required brain power, it would be physically impossible for us to know everything that is going on everywhere at the same time. Since the social world is too complex for us to understand and accurately predict, we can only understand and relate to it through limited and flawed information.

This limited knowledge of the social world is precisely what makes much of it appear random and unpredictable. If I meet a pregnant woman, I can establish that her baby has a 50% chance of being a girl (or a boy, for that matter). The sex of the baby, which in this case is just as random as a coin toss, is reduced to a probability because of my lack of knowledge.

The events that led to the baby’s formation, just like the events that led to the coin falling on a particular side, are either too complex or too inaccessible for me to understand. However, a doctor with access to ultrasound scans, and thus to privileged information, would obviously not see the answer to the question ‘what is the baby’s sex?’ as random.

In the end, the only difference between me and the doctor is access to information. We thus see that randomness has a subjective dimension to it. Nothing is inherently random. It is only what we cannot know and what we cannot control that effectively appears random to us. The same is true of financial crises, wars, politics and environmental change. Events caused by such complex processes only become intelligible after the fact.
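The point that randomness is a matter of information, not of the world itself, can be made concrete with a toy sketch. The setup below is purely illustrative: the baby's sex is a fixed fact, yet two observers with different access to it assign different probabilities to the same event:

```python
import random

# Randomness as missing information: the outcome is already fixed,
# but an observer without access to it can only assign a probability.
true_sex = random.choice(["girl", "boy"])  # the fixed fact, hidden from me

p_girl_without_scan = 0.5                               # my state of knowledge
p_girl_with_scan = 1.0 if true_sex == "girl" else 0.0   # the doctor's

# Same event, two different probabilities: the difference is information,
# not anything inherently random in the world.
```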

Taleb’s work thereby provides a highly useful prism through which future environmental change can be apprehended. As Taleb himself notes, pollution tends to be harmful in a nonlinear way: a small amount of it is relatively harmless, while a large amount can cause major disturbances.
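Nonlinear harm can be illustrated with a toy convex dose-response function. The quadratic form and the numbers below are illustrative assumptions, not Taleb's model:

```python
# A convex (nonlinear) harm function: doubling the dose more than doubles
# the harm. The quadratic form is purely illustrative.

def harm(dose):
    return dose ** 2

# The same total pollution, spread out versus concentrated:
spread_out = sum(harm(1) for _ in range(10))  # ten doses of 1 -> total harm 10
concentrated = harm(10)                        # one dose of 10 -> harm 100
# With convex harm, concentration is disproportionately damaging, which is
# why "a little" and "a lot" of pollution are qualitatively different.
```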

Recognizing this implies that “one does not necessarily need to believe in anthropogenic climate change in order to be ecologically conservative.” In an opaque world that we cannot fully understand and predict, the mere possibility of environmental risk suffices to justify environmental safeguards.