Futures Impossible: a new methodology to study world events

The study of the future, as a scientific and intellectual endeavor, used to be driven by the careful extrapolation of trends, as in Herman Kahn’s The Year 2000, or by the forecasting of complex interactions among many variables, as in the Club of Rome’s The Limits to Growth and Paul Ehrlich’s The Population Bomb. The technologies behind these studies relied on the mathematical tools of operations research developed during World War Two and on methods for aggregating expert opinion, such as the Delphi technique developed at RAND and the Institute for the Future.

The scenarios and forecasts built on this technical base were supplemented by the study of a few extreme hypothetical situations known as “wild cards” or “black swans” (major earthquake in Tokyo, terrorist attack in New York, asteroid strike in Western Europe) designed to stretch the borders of the crisis management maps and to stimulate our collective thought process—while remaining within the domain of the Possible.

Such techniques for describing the future and anticipating its opportunities and dangers have largely become obsolete because of the acceleration of technology itself and the increasing vulnerability of our society to chaotic processes that are not well behaved under most classic models.

In the world of the 21st century, the situations faced by decision-makers in government and industry are of a wholly different nature. In an economic environment where General Motors could go bankrupt in one week, and Lehman Brothers in one afternoon, the extrapolation of trends and the wisdom of experts are still relevant, but a new methodology is needed to deal with unforeseen discontinuities. Neither of the above catastrophes was a “wild card” in anyone’s scenario. No classical futurist could imagine such discontinuities because the tools to anticipate and describe them were not available: they were truly “impossible,” just as the Fukushima nuclear disaster was deemed “impossible” by the General Electric experts who built the plant and the Japanese authorities who managed it. Similarly, as a society, we seem to be incapable of imagining healthy, positive “impossibilities” such as reconciliation in Palestine, an end to terrorism, or a world without starvation.

At the Institute for the Future, a team headed up by Bob Johansen, Kathi Vian and myself has begun to develop a typology of Impossible Futures, starting from four classes of events:

A. Some futures are deemed impossible because they would require an extraordinary convergence of several scenarios, each of which has very low probability. The bankruptcy of General Motors (Fortune One!) in one week is a case in point.

B. Some futures are deemed impossible because they would require the convergence of several scenarios on time scales that violate our knowledge of reality. The failure of the Madoff funds, for example, was deemed impossible by his investors, all of whom were successful financial experts. It happened because two low-probability events converged: (1) regulatory authorities repeatedly refused to act every time the illegal scheme was brought to their attention, and (2) the subprime crisis dried up sources of funds overnight, exposing the fraudulent structure.

C. Some futures are deemed impossible because they would require the convergence of several scenarios, including forces or components that do not exist within accepted knowledge. In A. E. van Vogt’s novel The World of Null-A (“null-A” for non-Aristotelian), a secret agent named Gosseyn is repeatedly assassinated. Each time, he is reincarnated, with increased abilities, in a new body held in reserve by his masters in special sarcophagi. A future in which Gosseyn could exist lies outside the natural limits of our scientific knowledge and culture.

D. There are futures that are deemed impossible because we simply cannot imagine them. In Saddam Hussein’s culture there was no scenario in which U.S. forces could see the movement of his forces even at night, through clouds or through dust storms. Most nations still have no concept for devices that could detect underground cavities invisible from the air or from space. Even in modern American culture, the fact that remote classified facilities can be detected, visited, and accurately described by mental powers alone remains beyond accepted concepts.
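The class-A pattern can be made concrete with a toy probability sketch. The numbers below are invented for illustration, not drawn from any of the cases above: treating the converging scenarios as independent makes the joint outcome look one-in-a-million, which is exactly the sense in which such futures get labeled impossible.

```python
# Invented numbers: three scenarios, each judged to have a 1% chance.
p_each = 0.01
n_scenarios = 3

# Treated as independent, their joint convergence looks vanishingly rare,
# so the combined future gets filed under "impossible".
p_independent = p_each ** n_scenarios  # roughly 1e-06

# If a single common driver (say, one credit crisis) can trigger all
# three scenarios at once, the convergence is roughly as likely as the
# driver itself: ten thousand times more likely than the naive estimate.
p_common_driver = 0.01

print(p_independent, p_common_driver)
```

The gap between those two numbers offers one hedged reading of why a General Motors-style collapse can be “impossible” on paper yet happen in a week: the component scenarios were never independent.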

To a decision-maker in business or government, simply describing such impossible future scenarios is not helpful in the absence of a methodology for detecting, understanding, and mitigating their practical effects. What is needed is a deeper grid that can be used as an overlay to highlight radical discontinuities in technology, geopolitics, social behavior or economic patterns. We believe that such a tool needs to be developed if we want to survive the new realities where worldviews collide at an accelerated pace.

46 Responses to “Futures Impossible : a new methodology to study world events”

It’ll take quite a bit more evidence before I accept a statement like “remote classified facilities can be detected, visited, and accurately described by mental powers alone” as anything but sheer woo.

I’ll try to remain open-minded about whether or not I’m actually just closed-minded.

People in the comment thread dismissing remote viewing are making the same mistake as the OP was pointing out. It doesn’t matter whether or not RV is possible: it matters whether or not we consider and prepare for events that currently seem impossible. RV is an excellent example, since it’s something that appears to be impossible, but that if possible would have the capacity to totally fuck with all of our existing intelligence systems. Even if RV ends up continuing to be understood as impossible indefinitely, it is entirely useful to contemplate what kinds of mechanisms would be necessary to circumvent some arbitrary but well-defined model of RV, both as practice in planning for the fallout of other impossible things and in case something comes along that has many of the strengths and weaknesses of the fictional conception of RV.

Not accepting something as a proven fact is not the same thing as categorically dismissing the concept. Nevertheless, I don’t plan to spend much time devising a counter-response to the possibility that unicorns with tentacles could manipulate the price of gold from Alpha Centauri.

No. It’s not useful at all to spend time and effort on planning countermeasures for something which is, and will remain, fictional. Planning for category D futures is, in any case, a massive waste of time because (by definition) they are unimaginable. The example given (remote viewing) is purely fictional, not an unimaginable future. The fact that it can be written down makes it imaginable (by definition), but it doesn’t make it plausible.

There’s a huge confusion in the above article between genuine black swan events and bad risk management. Fukushima wasn’t a black swan: it was a disaster waiting to happen. Frankly, treating it as a black swan lets the Japanese nuclear regulatory organizations, the Japanese government, TEPCO and GE too easily off the hook. If Fukushima had been built in a region where earthquakes and tsunamis had never been known, *then* it would have been a black swan event. The fact is it was badly designed, badly sited and badly run. And several people had tried to point this out over time, so it’s not as if it was exactly a surprise. Given that, there was no mystery as to what happened next when nature did what nature does from time to time in Japan.

“True” black swan events are not unpredictable, just unpredicted by the subject cohort, according to the Taleb book. You are still right that Fukushima doesn’t fall into even that more open definition, though.

I propose there should be a new name for high-impact, low-frequency risks that are purposefully ignored and discredited by those with an economic incentive precisely to ignore the externality of said risk.

From Taleb’s The Black Swan:

What we call here a Black Swan (and capitalize it) is an event with the following three attributes. First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

I stop and summarize the triplet: rarity, extreme impact, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

Well put, Ohno. It doesn’t matter if any of these “impossible” events come to pass or not. I see the author recognizing the difficult state of future planning and trying to build a new foundation. Hopefully future future thinkers will take this or similar premises, evaluate their validity, and either assimilate or discard them. In this way the author has done a great service.

Well put. It doesn’t matter whether any of the “impossible” things mentioned in the article come to pass. The reality is that we as a species are encountering many events that we once thought were impossible. It can’t be bad to begin to develop a way of talking about things we can’t predict. Science can really move forward when someone lays out the groundwork for describing a new set of ideas. Future future thinkers can assimilate and then augment or discard these ideas.

About Fukushima not being a black swan: can we really say it was poor planning and bad management? It seems to me to be a case of experts with incredible resources doing the best they can to eliminate or mitigate all threats, and then being surprised by something “impossible” that they couldn’t plan for. If Fukushima isn’t a black swan, how about 9/11? Global warming? The Titanic? The breaking of the Curse of the Bambino by Boston in 2004? In retrospect, the evidence was there for all of these, but something about our worldview, evidence gathering, or smarts as a species wasn’t up to snuff, and they were impossible. I look forward to seeing where others take this line of reasoning.

P.S. In reference to another comment in this thread, has RV been “proven” to be impossible? I don’t think “Person A remote viewed Situation B” is a disprovable statement.

Once again, Fukushima was an avoidable disaster. From the Wikipedia page on regulatory capture, Japanese examples [http://en.wikipedia.org/wiki/Regulatory_capture#Japanese_examples]:

Despite warnings about its safety, Japanese regulators from the Nuclear and Industrial Safety Agency approved a 10-year extension for the oldest of the six reactors at Fukushima Daiichi just one month before a 9.0 magnitude earthquake and subsequent tsunami damaged reactors and caused a meltdown.

What makes Fukushima *not* a black swan (and, conversely, an avoidable disaster due to human greed and incompetence) is the fact that some people *did* prospectively predict that there was a problem. That ineffective (and possibly corrupt) regulatory agencies did nothing to heed that advice does not take away the fact that *some people* identified problems with the site, thus breaking the third requirement for a black swan – because if their advice had been heeded in time, the accident would not have happened (and it was always going to be the case that there was going to be a large earthquake and tsunami).

And from the Wikipedia entry on remote viewing [http://en.wikipedia.org/wiki/Remote_viewing]:

For the largest paranormal research institution, the James Randi Educational Foundation, out of all of the applicants who applied for the One Million Dollar Paranormal Challenge, nobody has even passed the preliminary tests.

Good point, John! If someone were to come up with a technology that effectively delivered the benefits of remote viewing (without the woo), then any scenarios that accounted for RV would have direct applicability.

Seems too far fetched? This “ancient” (2008) web article points at delivering images directly from the brain to a computer screen, something that was considered science fiction (bad science fiction, at that) in the not-too-distant 1980s: http://www.physorg.com/news148193433.html

The point is, developing contingencies for things that seem to be in the realm of fantasy _now_ is exactly part of the futurist’s job. We, as a species, have an interesting tendency to make fantastic things _real_ in one way or another… so long as we don’t get too comfortable with our assumptions.

I think the example chosen for D is just fine… it is literally impossible to give an actual example of something unimaginable, obviously, so the closest one can come to it is to take a known example that would evoke exactly the “that’s impossible” reaction we see in these comments.

The problem is not that we see too few impossible futures, but that we see too many. Should we be evaluating the impact of a catastrophic meltdown at Chooz due to unresolved thermal fatigue issues? A transport ship bombing that renders the Panama Canal beyond repair? The invention of an efficient, light, and compact electricity storage technology? And this is just off the top of my head in 30 seconds…

We have stuck with simple extrapolation for so long not for lack of imagination, but because extrapolation offers only a very narrow range of events for which to prepare. Granted, this is a bit like looking for your wallet under a street lamp (rather than where you lost it) because there is more light there, but take extrapolation away and we would be overwhelmed by the number of possibilities for which we could prepare.

The author may have a valid point here, but I’m not able to evaluate it properly for two reasons, one superficial and one which may be a deeper misunderstanding of the premise (on my part).

I do think remote viewing falls more reasonably into C than D and the inclusion in D seems odd enough to distract me and even make me question the rest of the thinking that went into the idea.

My second issue is more with the idea itself as I understand it.

Imagine two intelligent actors in an environment with considerable risk and limited resources. The author seems to be suggesting that an actor who weighs events only by their likelihood and impact when allocating preparation resources will fare worse than an actor who adds an arbitrary, increased weight to extremely unlikely occurrences and spends resources preparing for those. That seems fundamentally flawed and, in fact, sounds like exactly the approach that led to most of the outcomes described in the same article.

It was no great stretch of the imagination (or probability) to assume that mismanagement and unscrupulous activity would follow financial deregulation. In fact, that falls squarely into jjsaul’s colorful-bird category (I suggest brown duck).

The same for the reactor and the same for the technological weaknesses of the Iraqi army. None of those were particularly unlikely. No one predicted the coalition would be technologically inferior to Hussein’s forces. The arguments were that they were numerous and had quite a bit of combat experience. It turned out that didn’t count for as much as good logistics and technology in that particular conflict.

An example of applying the author’s suggestions might be seen in the “War on Terror” where a vast percentage of the resources of the wealthiest civilization in history (I’m talking worldwide, but the US led this charge) have been spent trying to fight the threat of lunatics setting their shoes or underwear on fire, and trying to stop teenagers from ingesting things that might make them giggly and potentially subversive. All this while underfunding disaster preparations for likely occurrences like floods, tornadoes, hurricanes, fires, drought, famine and disease outbreaks.

Now that I think it through, this alternative, the cousin to the brown duck, should be called a wild goose.

The problem with your allocation of resources scenario is that you’re not taking into account a) spillover benefits or b) declining confidence in estimates of probability. The whole thrust, as I understood it, is that the confidence of weights derived from parsimonious inductive reasoning about future states of affairs or knowledge has declined precipitously with the increasing complexity, volume, and rate of change in available information in the contemporary era. If remote viewing made this all seem a bit woo-woo, consider how similar this is to Popper’s critique of historicism (certainly a sober and skeptical thinker).

The point is that this entails an allocation of analytical resources more broadly distributed across the probability spectrum (which includes remote viewing and reincarnation, unless we granted current scientific models sacred doctrinal status while I wasn’t looking), even more so because of the spillover and reinforcement benefits of training to immediately, effectively assimilate and respond to major, unforeseen disruptive events. The ‘call to theorize’ being put forth is a statement to the effect that we need statistical and epistemological methods that are more robust and useful outside the center peak of high-comprehension, high-predictability outcomes.

Rough, but maybe useful analogies I’d draw would be to paraconsistent logics and the thermodynamics of systems far from equilibrium.

I like your argument, and if we restrict the models to only influencing where we place our bets for outliers, I think I can agree it’s a reasonable and useful effort. So spending some discretionary resources on thought experiments on counter measures against remote viewing or security leaks by the reincarnated might be worthwhile even if only for, as you mentioned, the spillover benefits. I’ll also grant that we can be sure our scientific models will change, sometimes in unexpected directions, but we will still be right, more often than not, by presuming that unlikely things are unlikely.

If we imagine a new model which takes into account how to allocate these outlier bets, haven’t we just moved the boundary of what we think is likely? We are simply saying that based on a more sophisticated model, those outliers are more likely than the much larger set of outliers upon which we choose not to bet. Aren’t we?

I’m not a philosopher or mathematician, so perhaps I’m just putting too much stock in the analogies without understanding the complexities of the underlying ideas.

That’s a good point; a method like this is definitely concerned with extending and refining the map of outlier events/observations that we can effectively analyze for their likelihood, implications, optimal response, and so forth. Further than that, though, it’s also being proposed that this isn’t just a technique for producing marginal gains in the efficiency of risk allocation; rather, it’s an urgently necessary reaction to an inflationary burst in the growth and complexity of new data, novel phenomena, and changing models. Under those circumstances, the probability distribution across outcome-space is held with less and less confidence, because we have access to a diminishing portion of the relevant data, which we analyze with models that are increasingly likely to be inaccurate or wholly irrelevant at an arbitrary point in the future.

So that means that we should expect (and I would argue are certainly experiencing) a growing incidence of events that would have been rationally rated low-probability, or not even considered, a given amount of time previously. The problem is that as our probability distribution flattens and spreads out, our overall allocation efficiency suffers, because our analytical tools are best at the center, where our models are reliable and our information complete. We don’t have the requisite methods or concepts available to usefully evaluate, compare, and map many heterogeneous unlikely events.
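One crude way to put a number on that flattening (toy figures, with Shannon entropy as the yardstick):

```python
import math

def entropy(dist):
    """Shannon entropy in bits; higher means a flatter distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Invented outcome distributions over five possible futures.
peaked = [0.90, 0.05, 0.03, 0.01, 0.01]     # well-modeled, predictable world
flattened = [0.30, 0.20, 0.20, 0.15, 0.15]  # many comparably likely outliers

print(entropy(peaked), entropy(flattened))
```

As probability mass migrates from the well-modeled peak into the outliers, entropy climbs toward its maximum (log2 of 5, about 2.32 bits, for five outcomes), and analytical tools tuned to the peak cover a shrinking share of the distribution.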

It sounds somewhere between dry and alarmist, but I really find it pretty inspiring. In global practice and in theory, we’re pushing the horizons into a faster, unlikelier world, where our understanding of the possible and the real transforms to fit a larger, more uncertain and chaotic domain.

And is something like Madoff a black swan? It was, in fact, a conspiracy. Are those black swans? From what I’ve read, the data about Madoff was in the open: enough of it, at least, for some to attempt to ring an alarm bell.

There is an excellent lecture by Andrew Lo, and an associated paper he wrote with physicist Mark T. Mueller, that discusses this (among other things) from an economic point of view. Their “Taxonomy of Uncertainty” is the best example I have seen of an attempt to add structure to the risk and uncertainty of the world. You can view the lecture at http://mitworld.mit.edu/video/794/. I think it should be required watching and/or reading for anyone in the finance industry. It would add to the work mentioned above.

My main objection is to the somewhat muddled use of probability and possibility. The inductive probability of particular outcomes, given a heretofore-reliable model of the underlying system or trends, is an at least partially different dimension of variation from the ‘deductive’ probability of outcomes that would force some degree of ad hoc alteration to models that cannot accommodate their occurrence. Generally, it seems to me that the first step toward developing a proper methodology of this kind would be to toss out the archaic assumption that single general definitions of broad concepts like ‘possibility’ and ‘probability’ are adequate to capture the analytical complexity of their fields.

Unfortunately, this article seems to be perpetuating a common but fundamental misunderstanding regarding foresight: that foresight is the same as prediction, or should be. A failure to predict a specific event is not a failure of foresight; only a failure to build resiliency that can cope with specific events is a failure of foresight. Prediction was never really possible, and in recent times it’s become more obvious that it isn’t. Some foresight practitioners have over the years dreamed of improving our ability to predict events, but as soon as one leaves the realm of trends and cyclic processes, it quickly becomes impossible. What foresight practitioners do is attempt to increase the resiliency of organizations like governments, meaning that when the inevitably unpredictable happens, we’re not left unable to cope. There are many different methods and techniques for achieving this, but they all share the common understanding that the game is not about prediction. What this means is that building for resilience actually *is* the “deeper grid” that the authors talk about–and the only possible one in a world of radical unpredictability.

No one is going to be as successful at developing new methods (or methodologies or w/e) for predicting, detecting, and mitigating risks as the leader who is willing to spend at least twice as much of their available resources than the person in their organization with the greatest interest in short-term profits and corner cutting wants to allow towards risk management via classical techniques.

It’s not that people are willy-nilly disregarding potential scenarios as impossible, it’s that they’re willing to disregard as many scenarios as they think they can get away with. Come up with a technique that’s twice as good at accurately predicting and mitigating risk for the same cost, and they’ll simply spend half as much money on it…at best. And that’s my prediction.

Wow… my major problem with this article has very little to do with considering the “impossible,” or with creating new methods for more complex environments. Rather, it’s that the lead-in makes a slew of inaccurate statements about the work of futurist tools, such as: “Such techniques for describing the future and anticipating its opportunities and dangers have largely become obsolete because of the acceleration of technology itself and the increasing vulnerability of our society to chaotic processes that are not well behaved under most classic models.”

The examples given in the article weren’t wild cards or black swans in anyone’s scenario? I can’t begin to tell you how many times I have heard scenarios about building too close to barrier lands (close to water: New Orleans, anyone, not to mention a nuclear power plant), about how robotics and radar/ultrasound would be used to detect enemy fortresses, or about how a real estate and corporate meltdown was imminent, not long before it occurred. Granted, maybe those in charge of implementing these ideas did not listen, but to say that the methods no longer work? In reality, it’s in the hands of the futurist(s) producing the scenarios as to how immersive and thought-provoking the pattern- and sense-making created by these tools turns out to be.

Don’t get me wrong, I’m all for better and more innovative methods, and I look forward to this one (that is the point: a new method is on the way, right?). But if we want to look deeper into social, organizational, and global complexities, you failed to mention incredible tools for this kind of “overlay,” such as Causal Layered Analysis, VERGE ethnographic scanning, Panarchy mapping (large-scale adaptive-system mapping, which points out that different layers of social, technological, and political life do not move at one large complex speed), or Spiral Dynamics, just to name a few (the last of which has been used widely to point toward reconciliation in Palestine, no less).

I truly love the work of IFTF, but some very broad strokes were painted here.

I think the D example of remote viewing is an attempt to put the improved airborne and orbital sensory systems into context. Remember, recent satellites have found cities in the Amazon and rivers beneath the Sahara, and may well be able to read the layout, and perhaps the contents, of a building if the roof is not properly shielded.

Apologies, but I don’t see a method or methodology here. Correct me if I’m wrong, but this is just four categories in which to place impossible futures. In the article you call it a typology, which I think is more accurate, or maybe a taxonomy. If there is actually a method, would someone point it out to me? I’d be very interested.

Folks have so much investment in keeping possibility tied to their personal world view. While I’m more skeptical than your average woo-woo bunny, I’ve been close enough to the work of Jacques, Dr. Ed May, Dr. Jeff Kripal, and others to have ascertained that at the very least, weird shit happens all the time that just won’t get stuffed into these small boxes some folks carry.

Interesting approach, and one worth further study. The value of developing scenarios for possible as well as impossible events is to build intellectual muscle. In fact, the more “impossible” an event, the better it serves as a builder of these forecasting skills. The process of considering the impossible is a way to avoid “breathing our own fumes,” i.e., becoming so self-satisfied that we stop accepting new and potentially upsetting information. From my own experience, when I dismiss ideas out of hand, ridicule them, and refuse even to discuss them, I realize that I am very far from scientific or rational. When I behave this way I have learned, as an antidote, to ask the question, “What am I afraid of, and why?”

Consider the passive, externally driven listening device invented by Leon Theremin, the Soviet inventor of the surf-rock/’50s-B-movie musical instrument (the theremin). Not quite remote viewing, but with the right equipment it’s remote listening, and I doubt many people were pointing to Leon Theremin as an inventor of spy apparatus.

So in other words, maybe remote viewing isn’t a realistic concern, but going by Clarke’s law there are probably many possible technologies that look a whole lot like remote viewing. In that light, it’s worth asking what it would take to defend against RV, as an indication of what it would take to defend against these other technologies.