Probabilistic reasoning and the magic of 50/50

There’s a surfeit of studies showing that puny humans are inefficient when reasoning under uncertainty. Lately a lot of attention has been given to tail risks. Rare events can have huge consequences. Furthermore, it’s hard to put meaningful probabilities on some of these events — how do you assign a probability to the event that the Greenland ice sheet melts? What would such a probability even mean?

On a more basic level, we’re bad at reasoning even when probabilities are known and not extreme. The best evidence for this is the continued existence of roulette. Would getting punters to agree that, on any given spin, all 38 (or 37) numbers are equally likely be enough to talk them out of it? If they’re just betting red or black, I think so. Counting shows that fewer than half of the numbers are red (18 of 38, since the zeros are green), so you can convince someone that an even-money bet loses in the long run.
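The counting argument really is that short. A sketch, assuming an American wheel with 18 red pockets out of 38: over a “fair cycle” of 38 spins in which each number comes up exactly once, the red bettor wins 18 times and loses 20.

```python
RED = 18       # red pockets on an American wheel
POCKETS = 38   # 18 red + 18 black + 2 green (0 and 00)

# Even-money bet of 1 unit on red, over a cycle in which
# each pocket comes up exactly once:
profit = RED * (+1) + (POCKETS - RED) * (-1)
print(profit)  # -2: down 2 units per 38-spin cycle
```

No expectation needed, just more losing numbers than winning ones.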

But most inveterate roulettists (I’m related to a few) don’t bet this way. They have more complicated schemes that involve covering weird subsets of the numbers. To explain why these bets are a bad idea, you have to get into expectation*. I might be unemployed, yet I still don’t have the time to teach expected values to everyone in my extended family.
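For what it’s worth, the expectation calculation itself is short, even if teaching it isn’t. A sketch, assuming an American wheel and the standard inside-bet payout of (36/k − 1) to 1 on a bet covering k numbers (which holds for straight, split, street, corner, and six-line bets): every such bet has exactly the same expected loss, so the complicated coverage schemes change nothing.

```python
from fractions import Fraction

def inside_bet_ev(k, pockets=38):
    """Expected profit per unit staked on a bet covering k numbers.

    Assumes the standard inside-bet payout of (36/k - 1) to 1,
    which holds for straight (k=1), split (2), street (3),
    corner (4), and six-line (6) bets.
    """
    win = Fraction(k, pockets)       # chance the ball lands on a covered number
    payout = Fraction(36, k) - 1     # profit per unit staked on a win
    return win * payout - (1 - win)  # expected profit per unit staked

# Same house edge regardless of how many numbers you cover:
# 36/38 - 1 = -1/19, about -5.3% per unit staked.
for k in (1, 2, 3, 4, 6):
    assert inside_bet_ev(k) == Fraction(-1, 19)
```

On a single-zero wheel the same calculation gives −1/37 for every inside bet, so the conclusion survives the trip to Europe.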

On one level, then, probabilistic reasoning that treats outcomes as either deterministic or coinflips improves upon purely deterministic reasoning. On another, though, we make the 50% mark too magical — at least I do sometimes, like when reading weather forecasts. When the chance of rain is at least 50%, I’ll act as if rain is certain. When it’s less than 50%, I’ll act as if it’s certain there’ll be no rain. It’s a computationally cheap heuristic, since I don’t want to do a cost-benefit analysis every time I’m deciding whether or not to go out tomorrow. Still, it’s a heuristic that’s been resistant to improvement, even as it’s become clear to rain-phobic me that a 20% or 30% chance of rain should be a strong enough deterrent.
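To put a number on that intuition: in the simplest cost-benefit model, the break-even probability isn’t 50%, it’s the ratio of the precaution’s cost to the damage it prevents. A sketch with made-up costs (the 1 and 5 below are illustrative, not measurements):

```python
def act_on_rain(p, cost_precaution, cost_damage):
    """Take the precaution whenever expected damage exceeds its cost.

    The break-even probability is cost_precaution / cost_damage,
    which equals 50% only in the special case where the precaution
    costs exactly half of the damage.
    """
    return p * cost_damage > cost_precaution

# If lugging an umbrella costs 1 unit of hassle and getting soaked
# costs 5, even a 30% chance of rain justifies carrying it:
assert act_on_rain(0.30, 1, 5)       # 0.30 * 5 = 1.5 > 1
assert not act_on_rain(0.15, 1, 5)   # 0.15 * 5 = 0.75 < 1
```

For a rain-phobe, the damage term is large and the rational threshold sits well below 50%, which is exactly the 20–30% intuition above.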

So I’m wary of overprivileging the 50% mark in attempts to explicate the probability scale (Nick Barrowman has a good one). Not only do probabilities of 50% usually occur in artificial situations (or in artificial models for situations), but payoffs are usually uneven. Put another way, a 0-50-100 scale is better than a 0-100 scale, but once you have more than one intermediate point, 50 shouldn’t be special anymore.

*I guess you could decompose bets on multiple numbers into bets on singletons, and make the point through equally likely outcomes, but good luck with that.