I have often seen discussions of what actions to take in the context of rare events framed in terms of expected value. For example, if a lottery has a 1 in 100 million chance of winning, and delivers a positive expected profit, then one "should" buy that lottery ticket. Or, if an asteroid has a 1 in 1 billion chance of hitting the Earth and thereby extinguishing all human life, then one "should" take the trouble to destroy that asteroid.

This type of reasoning troubles me.

Typically, the justification for considering expected value is based on the Law of Large Numbers, namely, if one repeatedly experiences events of this type, then with high probability the average profit will be close to the expected profit. Hence expected profit would be a good criterion for decisions about common events. However, for rare events, this type of reasoning is not valid. For example, the number of lottery tickets I will buy in my lifetime is far below the asymptotic regime of the law of large numbers.
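The gap between the two regimes is easy to see in a quick Monte Carlo sketch (all numbers below are invented for illustration, and `avg_profit` is just a helper, not a standard function):

```python
import random

random.seed(0)

def avg_profit(p_win, payout, cost, n_trials):
    """Average profit per trial over n_trials independent bets."""
    winnings = sum(payout if random.random() < p_win else 0.0
                   for _ in range(n_trials))
    return winnings / n_trials - cost

# Common event: a $1 bet that pays $2.10 half the time (EV = +$0.05).
# The law of large numbers bites: the average is close to +$0.05.
common = avg_profit(0.5, 2.10, 1.0, 100_000)

# Rare event: a $1 ticket with a 1-in-100-million chance of $300 million
# (EV = +$2).  Ten thousand tickets, a generous lifetime's worth, is
# nowhere near the asymptotic regime: almost surely every ticket loses.
rare = avg_profit(1e-8, 3e8, 1.0, 10_000)

print(common)   # near +0.05
print(rare)     # almost surely exactly -1.0
```

The first average has converged on its expectation; the second tells you nothing about the expectation at all.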

Is there any justification for using expected value alone as a criterion in these types of rare events?

EDIT: As many have pointed out, the article in Slate discusses many issues, and it is not fair to say that this article subscribes to this point of view. However, there are other sources which do appear to subscribe to it.

By the way, Qiaochu, there are totally people who really believe in maximizing EV at any expense! See e.g. the "shut up and multiply" school around the blog Less Wrong. lesswrong.com/lw/n3/circular_altruism
– JSE Dec 31 '10 at 21:43


@JSE: huh. I have to disagree with the example; he's not even using a nontrivial utility function. Whether I choose the first or second option depends heavily on whether these are the last 500 remaining humans, as far as I know. If they are, then I would act to minimize risk, not to maximize expected value, because my utility function would be heavily biased against the total extinction of humanity.
– Qiaochu Yuan Dec 31 '10 at 22:29


The relative merits of maximizing expected utility get to the heart of one's extra-mathematical interpretation of probability, I think. So I basically agree with Gjergji. That said, the question is still interesting to consider, especially because it is not uncommon for expected utility maximization criteria to be accepted uncritically. In some fields, when someone deviates from this approach, it requires a lengthy defense. See, for example, faculty.wcas.northwestern.edu/~cfm754/actualist_rationality.pdf
– R Hahn Dec 31 '10 at 23:57


Anyway, the expectation is merely the first moment of your distribution: knowing the variance, and better yet the higher moments, paints a more accurate picture of the situation. This is the case in the lottery example, where the variance is huge.
– Thierry Zell Jan 1 '11 at 5:30


Every economist in the universe wants an answer to this question!
– drbobmeister Jan 1 '11 at 6:09

10 Answers

The Kelly criterion is the optimal betting strategy for a player with limited resources (and a player with infinite capital would probably not be interested in buying lottery tickets anyway).

A Kelly player bets a fraction of their capital $X$ on each trial so as to maximize the expected value of $\log X$. The strategy does not depend on the number of trials, so it does not matter whether the negative (or positive) outcome is a rare event.
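As a concrete sketch (with made-up odds, not from any real game): for a binary bet paying $b$-to-1 with win probability $p$, maximizing $E[\log X]$ gives the stake $f^* = p - (1-p)/b$. Simulating it shows why this beats staking everything, even though every individual bet has positive expected value:

```python
import random

random.seed(1)

def kelly_fraction(p, b):
    """Kelly fraction for a bet paying b-to-1 with win probability p:
    the stake maximizing E[log(wealth)] is f* = p - (1 - p)/b."""
    return p - (1 - p) / b

def final_wealth(fraction, p, b, n_rounds):
    """Wealth after n_rounds, betting a fixed fraction of current capital."""
    wealth = 1.0
    for _ in range(n_rounds):
        stake = fraction * wealth
        wealth += stake * b if random.random() < p else -stake
    return wealth

p, b = 0.6, 1.0                  # a favorable even-money bet (invented)
f_star = kelly_fraction(p, b)    # 0.2: bet 20% of capital each round

# The Kelly bettor's wealth grows; staking everything each round, which
# maximizes single-round expected value, goes broke on the first loss.
print(final_wealth(f_star, p, b, 1000))   # large, with high probability
print(final_wealth(1.0, p, b, 1000))      # 0.0, with overwhelming probability
```

The growth rate per round for the Kelly bettor is $0.6\log(1.2) + 0.4\log(0.8) \approx 0.02 > 0$, while the all-in bettor's wealth hits zero at the first loss and stays there.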

I really like this approach, because it generalizes the deterministic reasoning of the law of large numbers to rare events. There is no recourse to the artifice of utility functions.
– David Harris Jan 3 '11 at 4:08


Nobel Prize-winning economist Paul A. Samuelson disagreed with this approach in his paper "Why we should not make mean log of wealth big though years to act are long" written entirely with words of one syllable*. Here's the official journal page and here's a free online copy. *There are a few intended exceptions like "Samuelson" and "syllable". Fun puzzle: find a two-syllable word in the paper that Samuelson missed.
– aorq Jan 5 '11 at 22:22

In my opinion expected value is a useful but far from definitive decision criterion in situations like this. In other words, I think a person deciding whether to buy a lottery ticket ought to compute the expected value, but doesn't need to obey its dictates. I allude to this in the Slate article you mention, which I wrote almost ten years ago:

"But there's an even deeper problem, which is this: It's by no means clear that the benefits of having $280 million are 280 million times as great as the benefits of holding onto your original dollar. In fact, those benefits vary from person to person. Some people will gladly trade a dollar for a 1 in 10 chance of $10. Some people would rather have the dollar. Mathematics can't tell you which of those two you should prefer, although psychologists have learned a lot about what people typically do prefer. Mostly, people are "risk-averse"—given the choices above, they'd keep their dollar."

Man, I forgot how many equations they used to let me put in my articles!

To me, expected value is useful only in its original context, when you get to do repeated independent trials and want an idea of how well you'll do. I don't see how it's useful for deciding whether to bet on a rare event or not unless you plan to make it a regular habit. Or is that what you're referring to?
– Deane Yang Jan 1 '11 at 0:11

Various sets of rationality axioms (von Neumann-Morgenstern, Savage, and others) imply that people behave so as to maximize expected utility. A differentiable utility function is (by definition) approximately linear over small ranges, so for small bets, maximizing expected utility is the same thing as maximizing expected value. For large bets, this reasoning of course does not apply.

(In case that was unclear: suppose you start with wealth I and consider a bet where you might win the amount x with probability p and lose the amount y with probability 1-p. The theorem says that there exists a function U such that you will take this bet if and only if pU(I+x) + (1-p)U(I-y) > U(I). When x and y are sufficiently small, this is essentially equivalent to px > (1-p)y.)
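This can be checked numerically. A sketch using log utility as one example of a concave (risk-averse) utility function, with invented wealth and bet sizes:

```python
import math

def accepts_bet(U, wealth, p, x, y):
    """Expected-utility criterion: take the bet iff
    p*U(I + x) + (1 - p)*U(I - y) > U(I)."""
    return p * U(wealth + x) + (1 - p) * U(wealth - y) > U(wealth)

def positive_ev(p, x, y):
    """Expected-value criterion: take the bet iff p*x > (1 - p)*y."""
    return p * x > (1 - p) * y

U = math.log        # one concave utility, chosen purely for illustration
I = 100_000.0       # wealth, an invented figure

# Small bet: win $11 or lose $10 on a coin flip.  Both criteria agree,
# because U is approximately linear over the range [I - 10, I + 11].
print(positive_ev(0.5, 11, 10), accepts_bet(U, I, 0.5, 11, 10))

# Large bet: win $99,000 or lose $90,000.  Expected value still says yes,
# but the expected-utility agent declines: the linear approximation fails.
print(positive_ev(0.5, 99_000, 90_000), accepts_bet(U, I, 0.5, 99_000, 90_000))
```

The small bet moves U over a range where its curvature is negligible, so the two criteria coincide; the large bet does not.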

If I am not mistaken, this type of reasoning requires subscribing to a particular model of human psychology: "rational behavior" as in classical economics. This model has little to no scientific basis. For small bets, the reasoning based on the law of large numbers requires only that people prefer more money, which seems much less contentious.
– David Harris Dec 31 '10 at 17:23

David: This type of reasoning is based on certain axioms about preferences, of which the most contentious is this: If L = px+ (1-p)y is a lottery (that is, L gives you probability p of winning prize x and probability 1-p of winning prize y --- you should treat x and y as symbols here, not as real numbers) and M = qx+(1-q)y is another, you'll be indifferent between rL+(1-r)M on the one hand and (rp + (1-r)q)x + (r(1-p) + (1-r)(1-q))y on the other hand. This is certainly an assumption, but I'd hesitate to call it a psychological model.
– Steven Landsburg Dec 31 '10 at 17:33

Usually people want to balance both reward (e.g., expected return) and some measure of risk. To give just one example, in modern portfolio theory one talks about an "efficient frontier" of different strategies, in which variance of return is minimized for a given expected value, or expected value maximized for a given variance. A priori, no particular strategy on the frontier is considered optimal overall: the best for an individual depends on their risk/reward preferences. On the less mathematical side, look at any popular article about asset allocation--you'll see a lot of talk about personal "risk tolerance."
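A minimal two-asset sketch (every parameter below is invented for illustration) shows how the frontier trades expected return against variance:

```python
# Hypothetical two-asset market; all numbers are made up.
mu = (0.08, 0.03)       # expected returns ("stocks", "bonds")
sigma = (0.20, 0.05)    # standard deviations of return
rho = 0.1               # correlation between the two assets

def portfolio(w):
    """(expected return, variance) with weight w in asset 0, 1 - w in asset 1."""
    mean = w * mu[0] + (1 - w) * mu[1]
    var = ((w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2
           + 2 * w * (1 - w) * rho * sigma[0] * sigma[1])
    return mean, var

# Sweep the weight: each point trades return against risk, and nothing
# singles one out as "optimal" without a stated risk preference.
frontier = [portfolio(w / 20) for w in range(21)]
mean_mv, var_mv = min(frontier, key=lambda mv: mv[1])
print(mean_mv, var_mv)   # the minimum-variance portfolio on this grid
```

Note that the minimum-variance point holds a little of the risky asset: with low correlation, a small stock allocation actually reduces total variance below that of bonds alone.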

However, there are two special things about the Slate lottery article you mention. First, the article's author has contributed to Math Overflow in the past, and may have answers for you himself. (The internet is a small world...) In this case I'm guessing the main point was to riff on oft-overlooked factors that go into lottery value, not to give investment advice. For one thing, if it's a good idea on pure economic grounds to buy one lottery ticket, it's almost surely a good idea to buy 100, and no one seems to be recommending that :-)

That leads to the second special feature of lotteries: the real value, as far as I can tell, is that they are fun. If you buy a ticket, you can dream about being rich for a bit. Buying 100 tickets may multiply your odds of winning, but for whatever reason doesn't seem to increase the fun of daydreaming.

Thanks for all the interesting perspectives on this question. It seems that there are two quite distinct reasons for focusing on expected value. The first is based on the law of large numbers, and works fine for common events. This does not require any theoretical apparatus of utility functions or choice under uncertainty. This type of reasoning is on very firm ground.

There is a completely distinct approach, based on expected utility. Expected value is often used, sometimes ignorantly, as a proxy for expected utility. This approach seems more treacherous to me, as it requires postulating a specific utility function, and even more so, it requires making some very strong moral or psychological assumptions.

In many cases, it seems, the difference between the two approaches is silently ignored.

Which of the von Neumann-Morgenstern axioms do you consider a very strong moral or psychological assumption?
– Steven Landsburg Jan 1 '11 at 6:44


I guess I doubt the notion that a person can have preferences that are well-defined and consistent and which depend only on the physical objects (money, etc) under consideration, not the path to reach them or any "metadata" such as considerations of fairness etc. This is even more acute in the presence of uncertainty. Anyway, the scientific evidence (such as it is) clearly seems to be that people do not behave in this way.
– David Harris Jan 1 '11 at 14:29

As far as lotteries are concerned, there was a recent article in the Monthly: "Finding good bets in the lottery, and why you shouldn't take them" by Aaron Abrams and Skip Garibaldi, showing that expected value is not enough to make a decision: you can have a lottery with a positive expected return rate (if the roll-over jackpot has reached a very high amount), and yet buying a ticket can still be shown to be a bad investment. Note that this is more than a theoretical consideration, as some jackpots have led to attempts by some parties to "buy out" the lottery.

It's a longish but highly readable paper: they use some illuminating real data, work out a jackpot threshold that guarantees a positive expectation, and then use some basic portfolio analysis to show that even those positive-expectation cases cannot be good investments.
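One effect of the kind they analyze, jackpot splitting, can be sketched with invented numbers (not the paper's data), using the standard Poisson approximation for the number of co-winners:

```python
import math

def expected_jackpot_winnings(jackpot, p_win, other_tickets):
    """Expected jackpot winnings for one ticket, accounting for splitting:
    with K ~ Poisson(lam) co-winners, E[1/(1 + K)] = (1 - e^-lam)/lam."""
    lam = p_win * other_tickets       # expected number of co-winners
    if lam == 0:
        return p_win * jackpot
    return p_win * jackpot * (1 - math.exp(-lam)) / lam

# Invented numbers: $300M jackpot, 1-in-146M odds, 200M other tickets sold.
p = 1 / 146_000_000
naive = p * 3e8                                  # ignores splitting: ~$2.05
split = expected_jackpot_winnings(3e8, p, 2e8)   # roughly $1.12 after splitting
print(naive, split)
```

Precisely when a jackpot grows large enough to look attractive, many tickets are sold, and the expected share per winner shrinks, which is part of why a naive positive EV can be misleading.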

Of course, the St. Petersburg Paradox is already a good example of the limits of expectation, albeit more open to criticism, given its outlandish setup.

There's a streetlight effect at work (the joke is told about economists as well as about drunks). Typically it is easier to test whether the first moment is positive than to test anything else. So, as a rule of thumb that is helpful over a wide range of realistic inputs, it's a good place to start.

If you want more, then you need more data or preferences. For example, if you have a dozen children, is a p chance of six of them dying better or worse than a ½p chance of all twelve of them dying? The answer to that might affect how much effort should go into preventing asteroid-Earth collisions.

You may be looking for the opposite of what you stated: specifically, there is a game with infinite expected value, but it is not one that you are likely to play; see the St. Petersburg Paradox for an example. Alternatively, in the case of rare events such as winning the lottery, it may make sense to play because the positive utility received from the excitement may outweigh the negative utility of the financial cost.

My response before was the first idea that came to my mind, but I now want to elaborate with a more thoughtful answer. My general answer is that I don't believe that expected value for rare events can be applied uniformly to individuals. In fact, my point of view extends to certain instances with reasonable chances.

For example, consider a game that is only played once which involves a flip of a fair coin. If it lands on heads, you win $\$1002$, but if it lands on tails, you lose $\$1000$. The expected value here is of course $+\$1$, so it suggests that you should play. But of course, it is a meaningless value because you either win $\$1002$ or you lose $\$1000$, and that's the end of it. In short, you don't ask what the limit of a convergent sequence $\langle a_i \mid i \in \mathbb{N}\rangle$ is if you want to compute $a_0$.

Now you can argue against this point by saying that over the course of a lifetime, you will get many opportunities, so you should take every favorable one that you can. However, the size of the loss or gain, and the number of opportunities one gets to make bets with such clear-cut odds, depend on the individual's situation and preferences. In particular, there's an opportunity cost associated with the bet due to the limit on a person's financial resources.

Similarly, if a person is offered two one-time games yielding the same expected value but with one having greater variance, the choice the individual should make depends on the same considerations, including the individual's appetite for risk. The expected value from a purely monetary point of view will not differentiate the games.

This captures the spirit of the ideas mentioned in other posts regarding utility and one-time games.

I think expected value is a much more important consideration for large insurance companies because they cover many people, they have a large financial resource pool, and they have a collective utility curve per dollar earned that is much more linear than that of an individual.

As far as cutoffs are concerned, I once proposed that cutoffs of expected value should be tied to the number of trials: if the expected number of times an outcome would occur is less than $1/2$, given the number of times the experiment is to be performed, then that outcome should not be included in the expected value. The problem that was pointed out to me is that if we had a game with a huge number of distinct positive payouts, each with very low probability but collectively with high probability, then this adjusted expected value would be biased in a very misleading way.
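The objection can be made concrete with a toy game (all numbers invented for illustration):

```python
def adjusted_ev(outcomes, n_trials):
    """Expected value that drops every outcome expected to occur fewer
    than 1/2 times in n_trials (the cutoff rule proposed above)."""
    return sum(v * p for v, p in outcomes if p * n_trials >= 0.5)

# A toy game with 1,000 distinct prizes, each worth $1,000 and each
# occurring with probability 1/1,000: some prize is won on every play.
game = [(1000.0, 0.001)] * 1000

true_ev = sum(v * p for v, p in game)
print(true_ev)                  # 1000.0: you win $1,000 every single time
print(adjusted_ev(game, 100))   # 0: each prize individually is "too rare"
```

Every individual outcome fails the cutoff over 100 trials, so the adjusted expected value is zero, even though the game pays $\$1000$ with certainty on every play.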

The St. Petersburg paradox can be resolved by an argument which doesn't resolve the lottery argument. All you have to do is assume any reasonable cutoff for the total amount of money you can actually be paid, and the expected value of the game becomes much smaller. But the quantities which are paid out in standard lotteries are, in fact, feasible to pay out.
– Qiaochu Yuan Dec 31 '10 at 16:02
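The effect of such a cutoff is easy to compute. A sketch, under the convention that the game pays $2^{n-1}$ dollars if the first head appears on toss $n$:

```python
def st_petersburg_ev(cap):
    """EV of the St. Petersburg game when the bank can pay at most `cap`:
    the payout is min(2**(n-1), cap) if the first head lands on toss n.
    Uncapped, every toss contributes $0.50 and the series diverges."""
    ev, n = 0.0, 1
    while 2 ** (n - 1) < cap:
        ev += 2 ** (n - 1) * 0.5 ** n   # $2^(n-1) with probability 2^-n
        n += 1
    return ev + cap * 0.5 ** (n - 1)    # every later round pays the cap

print(st_petersburg_ev(2 ** 30))   # 16.0: a ~$1 billion cap buys a ~$16 game
```

The capped EV grows only like $\tfrac{1}{2}\log_2(\text{cap})$, so even an astronomically rich bank makes the game worth only a modest stake; whereas, as the comment notes, standard lottery jackpots are actually payable, so no such cutoff rescues the lottery computation.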

Backgammon (played for money with a doubling cube) has the same non-converging-expected-value problem, and people do play that.
– Kevin O'Bryant Dec 31 '10 at 16:05

In the backgammon version of the paradox, the expected value not only doesn't converge, it's an alternating series whose terms increase in absolute value. That means the correct decision depends very sensitively on precisely which financial cutoff you take. I suppose the correct thing to do, then, is to incorporate a smoothly decreasing likelihood of actually getting paid one's winnings, and a possibly different smoothly decreasing likelihood of actually paying one's losses. Some sociologist must have looked at which bets actually get paid!
– Kevin O'Bryant Dec 31 '10 at 16:10