Risky Move

At dinner with a visiting speaker a few years ago, I happened to mention that beginners rarely take enough risks in the opening moves of a backgammon game. I think a normal person, even one who hasn’t played much backgammon, would have an idea what I meant by that. But academics have very particular ideas about terms that have taken on specific technical meanings in their discipline, and so the visitor said “Risky? That doesn’t make any sense: it’s a game with only two outcomes.” For purposes of this discussion let’s pretend he was right about the two outcomes, although this isn’t quite true. My purpose, after acknowledging that his statement makes sense under a certain formal view of risk, is to argue that we should be flexible enough to talk about a different notion of “risky move” which under some circumstances is a much better match for common parlance.

When economists talk about risk, we talk about uncertain monetary outcomes and an individual’s “risk attitude” as represented by a utility function. The shape of the function determines how willing the individual is to accept risk. For instance, we ask students questions such as “How much would Bob pay to avoid a 10% chance of losing $10,000?” and the answer depends on Bob’s utility function. If a game has only two outcomes, though, risk attitude becomes irrelevant: you should simply make whatever move gives the higher probability of winning. Note that your utility function would certainly enter into a decision of how much to bet on a game, or whether to play at all, but once the game starts you are simply “in it to win it.”
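To make the textbook exercise concrete, here is a minimal sketch of how such a question gets answered. The specific assumptions (Bob has logarithmic utility and wealth of $100,000) are mine, chosen only for illustration:

```python
import math

# Hypothetical assumptions: Bob has wealth $100,000 and log utility.
wealth = 100_000.0
u = math.log

# Expected utility of facing the gamble: a 10% chance of losing $10,000.
eu_gamble = 0.9 * u(wealth) + 0.1 * u(wealth - 10_000)

# The most Bob would pay, p, solves u(wealth - p) = eu_gamble.
p = wealth - math.exp(eu_gamble)

# p comes out slightly above the expected loss of $1,000, because
# log utility is risk-averse (concave); a different utility function
# gives a different answer, which is the point of the exercise.
print(round(p, 2))
```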

In common parlance, though, there are many risky moves even in games with binary outcomes. Standard examples in sports are “going for it” on 4th down in American football, or pulling the goalie in hockey or soccer. In backgammon, one often decides to risk exposing a lone checker (“blot”), accepting the danger of being hit in exchange for a more likely gain. Even in the non-chance game of chess, one might “risk” bringing out the queen earlier than usual. No one would hesitate to call these tactics “risky” in everyday conversation. As I said, they don’t fit the usual notion of risk in economics, but let’s not simply label all of these common usages “wrong.” I think it’s much more constructive to define an alternative notion of risk and learn something about the similarities and differences between the two notions.

Once you think about it, it’s obvious that what “risky move” means in many contexts is that the move increases the near-term variance in your probability of winning. The backgammon literature refers to the “volatility” of a move as the standard deviation of your equity (expected value of monetary outcome) following your opponent’s next move. This clearly measures the kind of thing we’re getting at when we call a move risky. So, if we want to make a distinction, maybe we should try to use the word “volatile” as the technical term, while realizing that everyday language will continue to use “risk” for multiple meanings. Now, this notion of volatility comes with a normative recommendation: Ignore it. Maximize expectation. After all, you aren’t consuming your medium-term expected value, only your final outcome.
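The distinction can be sketched numerically. The equity numbers below are made up; the point is that volatility and expectation are computed from the same distribution of post-reply equities, but under the normative recommendation only the expectation should guide the choice:

```python
import statistics

# Hypothetical post-reply equity distributions for two candidate moves.
# Each list holds the equity after each of the opponent's equally likely replies.
safe_move = [0.10, 0.12, 0.08, 0.11]
bold_move = [0.40, -0.20, 0.45, -0.15]

def expectation(equities):
    return statistics.fmean(equities)

def volatility(equities):
    # Standard deviation of equity over the opponent's next move.
    return statistics.pstdev(equities)

# The bold move is far more volatile, but it also has the higher
# expectation, so the normative rule says to play it and ignore the swings.
best = max([safe_move, bold_move], key=expectation)
```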

Anytime two distinct concepts carry the same terminology, there is an error waiting to happen: carrying reasoning that should apply only to one concept over to the other. Here it would be the error of avoiding volatile moves out of risk aversion. A behavioral conjecture: players will favor less volatile moves when monetary stakes are larger. I wonder if there is evidence for this. I’m not a big soccer fan, but I think I’ve heard that teams get more cautious in big matches…if so, this is a fallacy. They would not be avoiding risk but merely postponing it, potentially to a shootout phase.

Volatility could also be called tactical risk. There is another kind of risk that I think deserves separate treatment, which could be called “strategic risk”. This is variance in a variable on which your winning probability depends in a concave or convex way. The simplest example is that in almost any game, win probability is concave in points for the leading team, convex for the trailing team. Hence there are frequent references to the leading team playing it safe and the trailing team taking risks. This is of course perfectly sound strategy. I just find it interesting that here is another perfectly useful notion of risk which could pedantically be called “wrong” by the textbook definition. Perhaps more on this in a future post.
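A toy illustration of the concave/convex claim (my own model, not from any specific game): if the rest of the game adds a roughly normal swing to the current point lead, and you win when the final lead is positive, then win probability is an S-shaped function of the lead, concave when you are ahead and convex when you are behind. A quick numerical check:

```python
from math import erf, sqrt

# Toy model: the remaining play adds a normal(0, sigma) swing to the
# current lead; you win if the final lead is positive, so win probability
# is the normal CDF evaluated at the current lead.
def win_prob(lead, sigma=10.0):
    return 0.5 * (1 + erf(lead / (sigma * sqrt(2))))

def second_diff(f, x, h=1.0):
    # Discrete second difference: negative where f is concave,
    # positive where f is convex.
    return f(x + h) - 2 * f(x) + f(x - h)

assert second_diff(win_prob, 5) < 0   # leader: extra points worth less and less
assert second_diff(win_prob, -5) > 0  # trailer: extra points worth more and more
```

So the leader rationally trades away point variance and the trailer rationally seeks it, exactly as the sports commentary has it.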

The economists’ definition of risk seems to require two numerical values of an outcome: an objective value (monetary outcome) and a subjective value (utility). “Risky attitude” means a utility which is a convex function of the objective value.

If I understand correctly, your use of the word “risk” is essentially the same; you just apply it to a different set of outcomes than {win, loss}. In the case of tactical risk the objective value is the probability of winning in the future, and beginners tend to behave as if they maximize a utility which is convex in this value. In the case of strategic risk, the objective value is the number of points.
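The commenter’s point can be checked with a tiny numerical example (the numbers and the squared utility are hypothetical): a utility that is convex in the win probability prefers a mean-preserving spread, i.e., the more volatile move:

```python
# Hypothetical convex "utility" over next-turn win probability.
u = lambda p: p ** 2

def mean(xs):
    return sum(xs) / len(xs)

safe = [0.50, 0.50]      # win probability after each equally likely reply
volatile = [0.20, 0.80]  # same mean, larger spread

assert mean(safe) == mean(volatile)  # equal expectation...
# ...but the convex utility prefers the spread:
assert mean([u(p) for p in volatile]) > mean([u(p) for p in safe])
```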

So the difference between your views seems to me to lie not so much in the meaning of risk as in the fact that this visitor applies the definition of risk only to the final outcome of the game, while you apply it also to intermediate outcomes, which are not binary.

[…] he might be just thinking about it backwards. The second serve is their best serve, but nevertheless it is a “backing-off” from their first serve because their first serve is (intentionally) excessively risky. […]

Andrew Gelman cited this post, but rather than address any of my actual points he excerpted a random paragraph which was merely “setting the scene” and used it as an excuse to beat on a favorite hobbyhorse (expected utility).

This annoyed me, of course, and I was rather satisfied with my reply in his comments section:
————————-
Hi, I guess I should check the incoming links on our blog more often. Here I am a month late, but better than never.

Andrew, if you read the whole post, you know that I was briefly introducing the standard setup, not to promote it, but quite the opposite, to point out that it gives an overly narrow definition of risk. My point, which I won’t detail here, is ultimately orthogonal to your well-known points (which are valid also), so I think I had something to say even to someone who considers expected utility discredited. To take your metaphor from the title, I was not discussing intercontinental navigation, but city planning, and for this a flat-earth model is actually superior since it avoids carrying around small terms which distract from the analysis.

In the brief quote that you included, I was simply making a factual statement about the kinds of exercises that students do. In such exercises, Bob’s decision certainly “depends on his utility function,” because Bob obviously isn’t a real person but a mathematical object. Whether Bob is a good or poor model of three-dimensional human beings is a totally separate question, and one I certainly wasn’t weighing in on in that paragraph. So, it is a bit jarring to see this rather uncontroversial paragraph, which doesn’t even hint at what the actual point of my article was, followed by “this is completely wrong,” with no further reference made to anything which actually was the topic of my post. I appreciate that you said expected-utility theory isn’t my “fault,” but I think you still may have left many readers with the impression that I was promoting the orthodoxy, oblivious to any flaws, when my post goes in quite a different direction.

You should know that at economic theory seminars many, perhaps most models use expected utility as one component; that everyone in the room is aware of the critiques you mention; and that no one stalks out of the room when the model is introduced, muttering about a flat earth. This isn’t because we are slaves to an orthodoxy, but because we know that when there are many other complications, it may be a good idea to look at a problem on a flat earth before proceeding to spheres or tori.