The Allais Paradox

Maurice Allais, a Nobel Prize-winning economist, died earlier this month. In this post, I’m going to focus on one of his many intellectual contributions, as it profoundly influenced modern psychology. It’s known as the Allais Paradox, and it was first outlined in a 1953 Econometrica article. Here’s an example of the paradox:

Suppose somebody offered you a choice between two different vacations. Vacation number one gives you a 50 percent chance of winning a three-week tour of England, France and Italy. Vacation number two offers you a one-week tour of England for sure.

Not surprisingly, the vast majority of people (typically over 80 percent) prefer the one-week tour of England. We almost always choose certainty over risk, and are willing to trade two weeks of vacation for the guarantee of a one-week vacation. A sure thing just seems better than a gamble that might leave us with nothing. But how about this wager:

Vacation number one offers you a 5 percent chance of winning a three-week tour of England, France and Italy. Vacation number two gives you a 10 percent chance of winning a one-week tour of England.

In this case, most people choose the three-week trip. We figure both vacations are unlikely to happen, so we might as well go for broke on the grand European tour. (People act the same way with lotteries: we typically buy the ticket for the biggest possible prize, regardless of the odds.)

Allais presciently realized that this very popular set of decisions – almost everybody made them – violated the rational assumptions of economics. Instead of making decisions that could be predicted by a few mathematical equations, people acted with frustrating inconsistency. After all, both questions involve the same 50 percent reduction in probability (from 100 percent to 50 percent, and from 10 percent to 5 percent), yet they generate completely opposite responses. Our choices seemed incoherent.
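
For readers who like to see the math, here is a quick sketch (my own illustration, not from Allais’s paper) of why the two answers can’t be squared with expected utility theory: scaling both probabilities by the same factor should never flip a rational agent’s preference, no matter what utilities we assign to the two trips.

```python
import random

# Under expected utility theory, a choice depends only on p * u(outcome).
# Whatever utility values we try, multiplying both probabilities by 0.1
# (100% -> 10%, 50% -> 5%) preserves the inequality, so the two questions
# should always get consistent answers.
random.seed(0)
for _ in range(1000):
    u_three_week = random.uniform(0, 100)  # utility of the 3-week tour
    u_one_week = random.uniform(0, 100)    # utility of the 1-week tour

    prefers_sure_thing = 1.0 * u_one_week > 0.5 * u_three_week
    prefers_ten_percent = 0.10 * u_one_week > 0.05 * u_three_week

    # Scaling both sides of an inequality by 0.1 can't reverse it.
    assert prefers_sure_thing == prefers_ten_percent
```

Yet most people prefer the sure thing in the first question and the long-shot grand tour in the second, which is exactly the inconsistency Allais pointed out.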

The Allais paradox was mostly ignored for the next two decades. But then, in the early 1970s, two Israeli psychologists, Daniel Kahneman and Amos Tversky, read about the paradox and were instantly intrigued: they wanted to know why people didn’t respond to probabilities in a linear manner. Based upon their conversations with each other, it seemed obvious that people perceived a smaller difference between probabilities of 1 percent and 2 percent than between 0 percent and 1 percent, or between 99 percent and 100 percent. In other words, not all changes in risk are created equal. As Allais had observed decades before, we value complete certainty an inordinate amount.
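
Kahneman and Tversky eventually put numbers on this curve. The sketch below uses the probability weighting function, and the parameter estimate (gamma of about 0.61), from their 1992 follow-up work on prospect theory; the code itself is just my illustration:

```python
def weight(p, gamma=0.61):
    """Subjective decision weight for probability p (Tversky & Kahneman, 1992)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# The jump from impossible to barely possible feels bigger than
# an equal-sized step just above it:
print(weight(0.01) - weight(0.0))   # ≈ 0.055
print(weight(0.02) - weight(0.01))  # ≈ 0.026

# ...and the jump from near-certain to certain feels bigger still:
print(weight(1.0) - weight(0.99))   # ≈ 0.088
```

A one-point change in probability matters far more at the edges of the scale than in the middle, which is just the certainty effect Allais described.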

But why was certainty so attractive? Kahneman and Tversky wanted to understand the psychology behind the paradox. Their breakthrough came by accident. Kahneman had been reading a textbook on economic utility functions, and was puzzled by the way economists explained a particular aspect of our behavior. When evaluating a gamble—like betting on a hand of poker, or investing in a specific stock—economists assumed that we made the decision by taking into account our wealth as a whole. (Being rational requires factoring in all the relevant information.) But Kahneman realized that this isn’t how we think. Gamblers in Las Vegas don’t sit around the card table contemplating their complete financial portfolio. Instead, they make quick decisions that depend entirely upon the immediate terms of the gamble. If there is a $100 wager, and you’re trying to decide whether or not to ante in with a pair of aces, you probably aren’t thinking about the recent performance of your mutual fund, or the value of your home.

But if we don’t make decisions based upon a complete set of information, then what are our decisions based upon? Which factors were actually affecting our choices? Kahneman and Tversky realized that people thought about alternative outcomes in terms of gains or losses, and not in terms of states of wealth. The gambler playing poker is only concerned with the chips right in front of him, and the possibility of winning (or losing) that specific amount of money. (The brain is a bounded machine, and can’t think about everything at once.) This simple insight led Kahneman and Tversky to start revising the format of their experiments. At the time, they regarded this as nothing but a technical adjustment, a way of making their questionnaires more psychologically realistic.

This minor change in notation soon revealed one of the most important discoveries of their careers. When Kahneman and Tversky framed questions in terms of gains and losses, they immediately realized that people hated losses. In fact, our dislike of losses was largely responsible for our dislike of risk in general. Because we felt the disadvantages of risky decisions (losses) more acutely than the advantages (gains), most risks struck us as bad ideas. This also made options that could be forecast with certainty seem especially alluring, since they were risk-free. As Kahneman and Tversky put it, “In human decision making, losses loom larger than gains.” They called this phenomenon “loss aversion.”
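
Their later work even put a number on the asymmetry. This sketch uses the value function and parameter estimates (a loss-aversion coefficient of about 2.25) from Tversky and Kahneman’s 1992 paper; the code is my own illustration, not theirs:

```python
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # losses loom roughly 2.25x larger than gains

def value(x):
    """Subjective value of a gain or loss of x, relative to the status quo."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# A $100 loss hurts more than a $100 gain pleases:
print(value(100))   # ≈ 57.5
print(value(-100))  # ≈ -129.5
```

By these estimates, the pain of losing $100 is more than twice the pleasure of winning it, which is why a risk-free option feels so alluring.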

This simple idea has profound implications. For one thing, it reveals a deep bias built into our brain. From the perspective of economics, there is no good reason to weight gains and losses so differently. Opportunity costs (foregone gains) should be treated just like “out-of-pocket costs” (losses). But they aren’t – losses carry a particular emotional sting. Take this imaginary scenario:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program A is adopted, 200 people will be saved. If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor?

When this question was asked of a large sample of physicians, 72 percent chose option A, the safe-and-sure strategy, and only 28 percent chose program B, the risky strategy. In other words, physicians prefer a sure good thing over a gamble that risks utter failure. They are acting just like the people who choose the certain one-week tour of England. But what about this scenario:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program C is adopted, 400 people will die. If program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor?

These two different questions examine identical dilemmas. Saving one third of the population is the same as losing two thirds. But when Kahneman and Tversky framed the scenario in terms of losses, physicians reversed their previous decision. Only 22 percent voted for option C, while 78 percent of them opted for option D, the risky strategy that might save everyone. Of course, this is a ridiculous shift in preference, as nothing substantive has changed in the scenario.* But our choices are guided by our feelings, and losses just make us feel bad. Because the coldhearted equations of classical economics neglect emotion, their description of our decisions remains woefully incomplete.
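
To make the equivalence concrete, here is a quick check (my own arithmetic, not part of the study) that the two framings describe identical programs for the 600 people at risk:

```python
TOTAL = 600  # people expected to die without intervention

# "200 saved" (program A) and "400 die" (program C) are the same outcome:
assert 200 == TOTAL - 400

# Expected number of survivors under each program:
ev_A = 200                                       # 200 saved for sure
ev_B = (1 / 3) * 600 + (2 / 3) * 0               # gamble, gain frame
ev_C = TOTAL - 400                               # 400 die for sure
ev_D = (1 / 3) * 600 + (2 / 3) * (TOTAL - 600)   # gamble, loss frame

# All four programs have the same expected outcome; only the wording differs.
assert abs(ev_A - ev_B) < 1e-9
assert abs(ev_C - ev_D) < 1e-9
assert abs(ev_B - ev_D) < 1e-9
```

The numbers are identical on both sides; only the description changes, yet the preferences flip.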

And this returns us to Maurice Allais. It would be easy to dismiss his paradox as a trifling issue, an irrelevant foible of human decision-making. But it actually helped lead to a radical revision of human nature. (Daniel Kahneman went on to win the Nobel Prize in 2002.) We’ve come to realize that we’re not nearly as rational as we like to believe, that the brain is driven by all sorts of inarticulate feelings and pre-programmed instincts. It’s worth noting, however, that the modern investigation into our irrationality didn’t begin with a brain scan, or with discussions of the amygdala. Instead, it began with a few inconsistent people, making economic decisions about their vacation. Leonard Cohen said it best: “There’s a crack in everything – that’s how the light gets in.” Allais found an important crack.

*Patients exhibit a similar bias: When asked whether they would choose surgery in a hypothetical medical emergency, twice as many people opted to go under the knife when the chance of survival was given as 80 percent as when the chance of death was given as 20 percent.