Making Responsible Decisions (When it Seems that You Can't)
Engineering Design and Strategic Planning Under Severe Uncertainty

What happens when the uncertainties facing a decision maker are so severe that the assumptions in conventional methods based on probabilistic decision analysis are untenable? Jim Hall and Yakov Ben-Haim describe how the challenges of really severe uncertainties in domains as diverse as climate change, protection against terrorism and financial markets are stimulating the development of quantified theories of robust decision making.

This is the opening paragraph of a paper that has been posted on the UK web site FloodRiskNet since November 2007. I submitted critical comments on this paper twice, but never received any response other than an "out of office" AutoReply (Feb 8, 2008).

So, no big deal.

I'll comment on this paper -- and related issues -- here. This discussion is intimately related to my campaign to contain the spread of Info-Gap decision theory in Australia.

The point of Hall and Ben-Haim's (2007) paper is to advocate the use of Info-Gap decision theory for responsible decision-making under severe uncertainty.

It is important, therefore, to point out -- especially for the benefit of readers who are not familiar with Info-Gap decision theory -- that this theory turns a blind eye to the universal

GIGO Axiom

Garbage In --- Garbage Out

As a consequence, it does not follow the well-known maxim:

GIGO Corollary

The results of an analysis are only as good as the estimates on which they are based.

Hence, the Info-Gap rhetoric would have us believe that, apparently by some miracle, an analysis conducted in the immediate neighborhood of a wild guess can generate results that are ... meaningful/worthwhile/useful/reliable, etc.!

But the most astounding thing of all is that, in their paper, Hall and Ben-Haim (2007) do not provide even a single reference to the very relevant and thriving field of Robust Optimization.

The title of the article suggests that the methodology that it promotes,
namely Info-Gap, is almost too good to be true.

After all, we are led to believe that there is a new methodology out
there that is capable of generating robust decisions in situations where
"... the uncertainties facing a decision maker are so severe that the
assumptions in conventional methods based on probabilistic decision
analysis are untenable ..."

With this as the appetizer, what should we expect for the main course?
Would it be a description of a breakthrough in decision-making under
severe uncertainty or ... a recipe for voodoo decision making?

Let's see.

Classical decision theory offers the Maximin paradigm as a "natural"
framework for dealing with robust decision-making under severe
uncertainty. And as we know only too well, the price tag for the ultimate
robustness provided by this paradigm is significant: the worst case
philosophy underpinning Maximin can result in extremely conservative
decisions. This is the familiar consequence of over-protection.
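To make the Maximin paradigm concrete, here is a minimal sketch in Python; the decisions, states of nature and payoffs are hypothetical numbers chosen purely for illustration:

```python
# Wald's Maximin: pick the decision whose worst-case payoff
# (over all states of nature) is largest.
# Decisions, states and payoffs below are made up for illustration.

payoff = {
    "d1": {"s1": 5, "s2": 1, "s3": 4},
    "d2": {"s1": 3, "s2": 3, "s3": 3},
    "d3": {"s1": 9, "s2": 0, "s3": 2},
}

def maximin(payoff):
    # For each decision, compute its worst-case payoff over the
    # states; then choose the decision maximizing that worst case.
    worst = {d: min(row.values()) for d, row in payoff.items()}
    best = max(worst, key=worst.get)
    return best, worst[best]

decision, value = maximin(payoff)
print(decision, value)  # d2 3 -- the most conservative choice wins
```

Note how the worst-case logic discards d3 despite its best-case payoff of 9: this is precisely the conservatism (over-protection) described above.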

So the question arises: how is it that Maximin, the stalwart of robust
decision-making, is not mentioned, let alone discussed, in the article?
Indeed, how is it that this paradigm is not mentioned in the two editions
of the Info-Gap book?

Readers interested in this aspect of Info-Gap are welcome to visit my
website to read more about this intriguing phenomenon. Here it suffices to
point out that Info-Gap's generic model is -- surprise, surprise -- a
simple Maximin model (the formal proof is two lines long).
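For concreteness, here is Info-Gap's robustness model in symbols (a sketch in generic notation of my own -- decision $q$, nominal estimate $\hat{u}$, horizon of uncertainty $\alpha$, region of uncertainty $U(\alpha,\hat{u})$ around $\hat{u}$, performance function $R$, and critical performance level $r_c$):

```latex
\hat{\alpha}(q,\hat{u}) \;=\; \max\left\{\, \alpha \ge 0 \;:\; r_c \,\le\, \min_{u \,\in\, U(\alpha,\hat{u})} R(q,u) \,\right\}
```

The outer max over $\alpha$, wrapped around the inner worst-case min over $u \in U(\alpha,\hat{u})$, is exactly the Maximin structure.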

This is the good news.

The bad news is that Info-Gap's generic model conducts its analysis in the
immediate neighborhood of the nominal value of the parameter of interest,
hence the worst-case analysis a la Maximin conducted by Info-Gap is local
in nature. In other words, Info-Gap deploys a definition of robustness
that does not attempt to explore the entire region of uncertainty. It
focusses on the nominal value of the parameter of interest and its
immediate neighborhood.

Since, under severe uncertainty, the nominal value is a poor indication of
the true value of the parameter of interest, and is likely to be
substantially wrong, it follows that robustness à la Info-Gap does not
actually deal with severe uncertainty: it simply ignores it.

This fundamental flaw can be vividly illustrated by inspection: the
results generated by the generic Info-Gap model are invariant to the
actual size of the region of uncertainty. Thus, if you increase the size
of the actual region of uncertainty, say ten-fold, the analysis and the
results are not sensitive to this change. And if you discover that the
region of uncertainty should actually be increased 100-fold, or
2034578-fold, this still has no impact whatsoever on the analysis and
the results generated by the generic Info-Gap model.
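This invariance is easy to see in a toy computation. The sketch below (a hypothetical quadratic performance model and made-up numbers, not anything from the Hall and Ben-Haim paper) computes Info-Gap's robustness by bisection; note that no description of the actual region of uncertainty appears anywhere in the calculation:

```python
# Info-Gap robustness: the largest horizon alpha such that the
# performance requirement holds for EVERY u within alpha of the
# nominal estimate u_hat. The "true" region of uncertainty never
# enters the computation, so enlarging it changes nothing.

def robustness(perform, u_hat, r_crit, alpha_max=1e6, tol=1e-9):
    # Bisection for the largest alpha with
    #   min over [u_hat - alpha, u_hat + alpha] of perform(u) >= r_crit.
    # perform is assumed to worsen as u moves away from u_hat, so the
    # worst case on the interval is attained at an endpoint.
    # alpha_max is only a search bound for the bisection -- it is NOT
    # a model of the true region of uncertainty.
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        worst = min(perform(u_hat - mid), perform(u_hat + mid))
        if worst >= r_crit:
            lo = mid
        else:
            hi = mid
    return lo

perform = lambda u: 10 - (u - 3) ** 2   # hypothetical performance model
u_hat = 3.0                              # nominal ("wild guess") estimate
alpha = robustness(perform, u_hat, r_crit=6.0)
print(round(alpha, 3))  # 2.0 -- unchanged however large the true region is
```

Whether the true parameter lives in [0, 6] or [-1000000, 1000000], the computed robustness is the same 2.0: the analysis sees only the immediate neighborhood of u_hat.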

In short, the "breakthrough" reported on in the article is not a
breakthrough at all. Rather, the article promotes a methodology that is
based on a simple Maximin model whose effective region of uncertainty is
concentrated around a nominal value of the parameter of interest. Since
under severe uncertainty this nominal value is a "wild guess", we have
no choice but to assume that the results generated by this methodology are
also wild guesses.

Now back to the title of the article.

Given that Info-Gap's generic model is a simple Maximin model and given
the local nature of the uncertainty analysis, it would be more appropriate
to change "Responsible" to "Irresponsible".

Flood risk analysis is subject to uncertainties, often severe, which have the potential to undermine engineering decisions. This is particularly true in strategic planning, which requires appraisal over long periods of time. Traditional economic appraisal techniques largely ignore this uncertainty, preferring to use a precise measure of performance, which affords the possibility of unambiguously ranking options in order of preference. In this paper we describe an experimental application of information-gap theory, or info-gap for short, to a flood risk management decision. Info-gap is a quantified non-probabilistic theory of robustness. It provides a means of examining the sensitivity of a decision to uncertainty. Rather than simply presenting a range of possible values of performance, info-gap explores how this range grows as uncertainty increases. This allows considerably greater opportunity for insight into the behaviour of our model of option performance. The information generated may be of use in improving the model, refining the options, or justifying the selection of one option over the others in the absence of an unambiguous rank order. Secondly, we demonstrate the possibility of exploring the value of waiting until improved knowledge becomes available by constructing options that explicitly model this possibility.

Interestingly, but not surprisingly, the full paper follows the usual script. In particular, it makes no mention whatsoever of the fact that the most famous indeed classic non-probabilistic approach to decision-making under severe uncertainty is Wald's Maximin paradigm (circa 1940); and it gives not the slightest hint that Info-Gap's robustness model is in fact a simple Maximin model (see FAQs about info-gap decision theory).

This is really an amazing story!

Question: how long can scholars in the field keep their heads buried in the sand?

Longer than you might think, mate!

Of course, the trouble is that in academia voodoo theories can be propagated and perpetuated via research grants. In practice this involves the supervision of graduate students — particularly PhD students — and Post-Docs. So, it may well be that the current crop of PhD students and Post-Docs working on Info-Gap decision theory will sustain its growth for a while.

Note that the completion of a PhD takes at least three years. So if these two projects are taken up by students, we should expect a stream of Info-Gap publications on these topics for at least the next ... five years (see What's next?).

My view on this is rather cynical.

I believe that the pressure on academics, especially when it comes to procuring research grants, is such that they are driven to portray the methods that they propose/develop as new and revolutionary. For think about it: what is the likelihood of your obtaining a grant if you explain in your grant application clearly, in plain words, that your research will be based on an old, mainstream theory?

In any case, we shall have to wait and see. Only time will tell (see What's next?).

But ... there is also some good news!

It appears that Hall and Harvey (2009) have taken note of my criticism of Info-Gap's robustness model. So much so that they have deemed it necessary to incorporate in their paper the following qualification regarding Info-Gap's robustness model:

An assumption remains that values of u become increasingly unlikely as they diverge from û.

This is a very significant development!

And yet, although this assumption explicitly introduces a new dimension to the thinking underlying Info-Gap's robustness model, it is not nearly strong enough to correct the fundamental flaws in that thinking, and is therefore unable to correct the flaws in the robustness model itself (see FAQ-76).

More than this, the assumption sharply contradicts Ben-Haim's many statements categorically banning any talk of likelihood in the context of Info-Gap's uncertainty and robustness models. For instance (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

The grant covers the period 1 December 2008 — 30 November 2013, so it looks like the Info-Gap saga will continue for at least 5 more years!

As expected, such grants will keep info-gap decision theory alive for a while. The latest product is another peer-reviewed article by Daniel Hine and Jim W. Hall (2010). See my review report on this gem.

Remark:

It will be interesting to see how Hall and Ben-Haim reconcile the fundamental difference between their views on embedding a "likelihood" structure in Info-Gap's uncertainty model. Recall (see Saga continues ... ) that whereas Ben-Haim categorically prohibits any attribution of likelihood to Info-gap's uncertainty model, Hall now (January 2009) adds an assumption that embeds a very strong (monotonic) likelihood structure in the model.

I am extremely pleased that, apparently for the first time, an official Government commissioned report takes notice of my criticism of Info-Gap decision theory.

Sadly, it is not an Australian report!

As they say, you can't be a prophet in your own land!

What a pity! What a waste!

I do hope, though, that AU government agencies that sponsor info-gap projects will follow suit soon. It is long overdue.

In any case, the following paragraph is a quote from page 75 of the DEFRA report:

More recently, Info-Gap approaches that purport to be non-probabilistic in nature developed by Ben-Haim (2006) have been applied to flood risk management by Hall and Harvey (2009). Sniedovich (2007) is critical of such approaches as they adopt a single description of the future and assume alternative futures become increasingly unlikely as they diverge from this initial description. The method therefore assumes that the most likely future system state is known a priori. Given that the system state is subject to severe uncertainty, an approach that relies on this assumption as its basis appears paradoxical, and this is strongly questioned by Sniedovich (2007).

The diplomatic language of the report cannot hide the obvious conclusions!

Needless to say, Ben-Haim will adamantly disagree with Hall and Harvey's (2009) position regarding the need for a likelihood structure to justify the validity of the local nature of Info-Gap's robustness model.

The point is that whereas Hall and Harvey (2009) reached the (right) conclusion and decided to spell out clearly the logic behind info-gap's local approach to robustness, Ben-Haim insists on hiding this from the public!

If you are taking it for granted that the quest for a magic formula capable of transforming a severe lack of knowledge/information into substantial knowledge was abandoned with the Enlightenment, I have news for you!

Apparently, against all scientific odds, Info-Gap scholars were successful in imputing likelihood to results generated by a non-probabilistic model that is completely devoid of any notion of likelihood!

Recall that Info-Gap decision theory prides itself on being non-probabilistic and likelihood-free. Yet, Info-gap scholars -- the Father of Info-Gap included -- now claim that Info-Gap's robustness model is capable of identifying decisions that are most likely to satisfy a given performance requirement.

Information-gap (henceforth termed 'info-gap') theory was invented to assist decision-making when there are substantial knowledge gaps and when probabilistic models of uncertainty are unreliable (Ben-Haim 2006). In general terms, info-gap theory seeks decisions that are most likely to achieve a minimally acceptable (satisfactory) outcome in the face of uncertainty, termed robust satisficing. It provides a platform for comprehensive sensitivity analysis relevant to a decision.

For, until now we have been warned repeatedly by Info-Gap scholars that no likelihood must be attributed to results generated by Info-Gap decision models. Indeed, we have been advised that this would be deceptive and even dangerous (emphasis is mine):

However, unlike in a probabilistic analysis, r has no connotation of likelihood. We have no rigorous basis for evaluating how likely failure may be; we simply lack the information, and to make a judgment would be deceptive and could be dangerous. There may definitely be a likelihood of failure associated with any given radial tolerance. However, the available information does not allow one to assess this likelihood with any reasonable accuracy.

This point is also made crystal clear in the second edition of the Info-Gap book (emphasis is mine):

In info-gap set models of uncertainty we concentrate on cluster-thinking rather than on recurrence or likelihood. Given a particular quantum of information, we ask: what is the cloud of possibilities consistent with this information? How does this cloud shrink, expand and shift as our information changes? What is the gap between what is known and what could be known. We have no recurrence information, and we can make no heuristic or lexical judgments of likelihood.

I fear though -- in view of my experience of the past 40 years -- that the danger is that the huge success of the Black Swan will inspire a new wave of voodoo decision theories, purportedly capable of ... "domesticating" black swans and preempting the discovery of ... purple swans!

We shall have to wait and see.

For those who have "been in hiding" I should note that Taleb has become quite a celebrity. According to the Prudent Investor Newsletters (Tuesday, June 3, 2008):

Mr. Taleb charges about $60,000 per speaking engagement and does about 30 presentations a year to "bankers, economists, traders, even to Nasa, the US Fire Administration and the Department of Homeland Security", according to Timesonline's Bryan Appleyard.

He recently got $4million as advance payment for his next much awaited book.

Earned $35-$40 MILLION on a huge Black Swan event -- the biggest stockmarket crash in modern history -- Black Monday, October 19, 1987.

So, if you haven’t heard him in person you can easily find on the WWW numerous videos of his interviews.

Here is a link to a very short (2:45 min) clip, recorded by Taleb himself, apparently at Heathrow Airport, of 10 tips on how to deal with Black Swans, and life in general.

Scepticism is effortful and costly. It is better to be sceptical about matters of large consequences, and be imperfect, foolish and human in the small and the aesthetic.

Go to parties. You can't even start to know what you may find on the envelope of serendipity. If you suffer from agoraphobia, send colleagues.

It's not a good idea to take a forecast from someone wearing a tie. If possible, tease people who take themselves and their knowledge too seriously.

Wear your best for your execution and stand dignified. Your last recourse against randomness is how you act -- if you can't control outcomes, you can control the elegance of your behaviour. You will always have the last word.

Don't disturb complicated systems that have been around for a very long time. We don't understand their logic. Don't pollute the planet. Leave it the way we found it, regardless of scientific 'evidence'.

Learn to fail with pride -- and do so fast and cleanly. Maximise trial and error -- by mastering the error part.

Avoid losers. If you hear someone use the words 'impossible', 'never', 'too difficult' too often, drop him or her from your social network. Never take 'no' for an answer (conversely, take most 'yeses' as 'most probably').

Don't read newspapers for the news (just for the gossip and, of course, profiles of authors). The best filter to know if the news matters is if you hear it in cafes, restaurants ... or (again) parties.

Hard work will get you a professorship or a BMW. You need both work and luck for a Booker, a Nobel or a private jet.

Answer e-mails from junior people before more senior ones. Junior people have further to go and tend to remember who slighted them.

It is interesting to juxtapose Prof. Taleb’s thesis in The Black Swan that severe uncertainty makes (reliable) prediction in the Socio/economic/political spheres impossible, with the polar position taken by his colleague, Prof. Bruce Bueno de Mesquita, who actually specializes in predicting the future.

One thing for sure: Sooner or later info-gap scholars will find a simple reliable recipe for handling Black Swans!

Not only professionals specializing in "decision under uncertainty", but also the proverbial "man in the street", take it for granted that accurately predicting future events is one of the most onerous challenges facing humankind — especially for persons in authority, persons responsible for the management of business or economic organizations, etc.

" ... Bruce Bueno de Mesquita is a political scientist, professor at New York University, and senior fellow at the Hoover Institution. He specializes in international relations, foreign policy, and nation building. He is also one of the authors of the selectorate theory.

He has founded a company, Mesquita & Roundell, that specializes in making political and foreign-policy forecasts using a computer model based on game theory and rational choice theory. He is also the director of New York University's Alexander Hamilton Center for Political Economy.

He was featured as the primary subject in the documentary on the History Channel in December 2008. The show, titled Next Nostradamus, details how the scientist is using computer algorithms to predict future world events ..."

Here is an interview with Prof. Bueno de Mesquita (with Riz Khan - The art and science of prediction - 09 Jan 08):

Apparently, all you need to accomplish this is a computer, expert-knowledge on Iran, and game theory!

Some of the predictions attributed to Prof. Bueno de Mesquita are:

The second Palestinian Intifada and the death of the Mideast peace process, two years before this came to pass.

The succession of the Russian leader Leonid Brezhnev by Yuri Andropov, who at the time was not even considered a contender.

The voting out of office of Daniel Ortega and the Sandinistas in Nicaragua, two years before this happened.

The harsh crack down on dissidents by China's hardliners four months before the Tiananmen Square incident.

France's hair's-breadth passage of the European Union's Maastricht Treaty.

The exact implementation of the 1998 Good Friday Agreement between Britain and the IRA.

China's reclaiming of Hong Kong and the exact manner the handover would take place, 12 years before it happened.

Impressive, isn't it!

As might be expected, these and similar claims by Prof. Bueno de Mesquita have sparked a vigorous debate not only in the professional journals but also on the WWW. Interested readers can consult this material to see for themselves, whether Bueno de Mesquita's claims attest to a major scientific breakthrough or ... voodoo mathematics.

I am a little skeptical about anyone who claims to have a 90% success rate. I just don't buy it. Especially when they say that they can explain away a lot of the other 10%.

If you come to me and tell me you have a model that gets it right 60% or 70% of the time, I may listen. Skeptically, but I will listen. 90% and I start to smell something.

All I wish to add here is that Prof. Bueno de Mesquita makes his predictions under conditions of "severe uncertainty", which of course renders them hugely vulnerable to what Prof. Nassim Taleb dubs the Black Swan phenomenon.

Hence, the very proposition that such predictions can be made at all, let alone be reliable, is diametrically opposed to Nassim Taleb's position. For his thesis is that Black Swans are totally outside the purview of mathematical treatment, especially by models that are based on expected utility theory and rational choice theory.

Even more interesting is the fact that Nassim Taleb and Bueno de Mesquita are staff members of the same academic institution, namely New York University. So, all that's left to say is: Go figure!

As indicated above, the debate over Bueno de Mesquita's theories is not new. It has been ongoing, in the relevant academic literature, at least since the publication of his book The War Trap (1981).

According to the Associated Press, the latest (2009, Mar 4, 4:39 AM EST) news from Russia about the future of the USA is that

" ... President Barack Obama will order martial law this year, the U.S. will split into six rump-states before 2011, and Russia and China will become the backbones of a new world order ..."

Apparently this prediction was made by Igor Panarin, Dean of the Russian Foreign Ministry's diplomatic academy and a regular on Russia's state-controlled TV channels (see the full AP news report).

Regarding the future of Russia,

"You don't sound too hopeful".
"Hopeful? Please, I am Russian. I live in a land of mad hopes, long queues, lies and humiliations. They say about Russia we never had a happy present, only a cruel past and a quite amazing future ..."

Malcolm Bradbury
To the Hermitage (2000, p. 347)

We should therefore be reminded of J K Galbraith's (1908-2006) poignant observation:

There are two classes of forecasters: those who don't know and those who don't know they don't know.

And in the same vein,

The future is just what we invent in the present to put an order over the past.

Malcolm Bradbury
Doctor Criminale (1992, p. 328)

So, we shall have to wait and see.

And how about this more recent piece by Heath Gilmore and Brian Robins in the Sydney Morning Herald (March 27, 2009):

"... COUPLES wondering if the love will last could find out if theirs is a match made in heaven by subjecting themselves to a mathematical test.

A professor at Oxford University and his team have perfected a model whereby they can calculate whether the relationship will succeed.

In a study of 700 couples, Professor James Murray, a maths expert, predicted the divorce rate with 94 per cent accuracy.

His calculations were based on 15-minute conversations between couples who were asked to sit opposite each other in a room on their own and talk about a contentious issue, such as money, sex or relations with their in-laws.

Professor Murray and his colleagues recorded the conversations and awarded each husband and wife positive or negative points depending on what was said. ..."

Such interviews should perhaps be made mandatory for all couples registering their marriage.

More details on the mathematics of marriage can be found in The Mathematics of Marriage: Dynamic Nonlinear Models by J.M. Gottman, J.D. Murray, C. Swanson, R. Tyson, and K.R. Swanson (MIT Press, Cambridge, MA, 2002).

Sniedovich, M. (2008) FAQS about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne, (PDF File)

Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)

Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF File)

Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.

Ben-Haim is not familiar with the fundamental concept of a point estimate. He does not realize that a function can be a point estimate of another function.

So when you read my papers, make sure that you do not misinterpret the notion of a point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is, say, a probability density function, and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.

Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space and therefore cannot be, say, a function. This is incredible!
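To illustrate the point (with a hypothetical family of densities, chosen only for this example): a point estimate can itself be a function, namely an element of an assumed family of functions.

```python
import math

# A "point" estimate need not be a number: it can be a function.
# Here the unknown object is a normal density with an unknown mean,
# and the point estimate is simply another density in the same
# (assumed) family -- an element of the same space of functions.

def normal_pdf(mean, sd):
    # Return the density function of N(mean, sd^2): one "point" in
    # the family of normal densities.
    def pdf(x):
        return math.exp(-((x - mean) / sd) ** 2 / 2) / (sd * math.sqrt(2 * math.pi))
    return pdf

# The true density is unknown; under severe uncertainty this nominal
# estimate is just a guess -- but it is a point estimate of the true
# density, even though it is not a point in a Euclidean space.
nominal_estimate = normal_pdf(mean=0.0, sd=1.0)
print(round(nominal_estimate(0.0), 4))  # 0.3989 (standard normal at 0)
```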

A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).

A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File).
It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.

A math-free explanation of the flaw in Info-Gap. ( December 31, 2006)
This is a very short article entitled The GAP in Info-Gap (PDF File).
It is a math-free version of the paper above. Read it if you are allergic to math.

If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.

Here is a list of relevant lectures/seminars on this topic that I gave in the last two years.

Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.