The Hot Hand Fallacy

In basketball, there is a belief among players that if someone goes on a streak and makes a series of consecutive shots, they are more likely to make their next shot. This belief was discussed and debunked in a landmark paper by Gilovich, Vallone, and Tversky, The Hot Hand in Basketball: On the Misperception of Random Sequences, leading to the term “hot hand fallacy” (by the way, this is the same Tversky who is profiled in Michael Lewis’s excellent new book, The Undoing Project).

However, since the original publication in 1985, this result has been challenged numerous times, and, as far as I can tell, there still isn't a consensus on whether the hot hand exists. Most recently, Miller and Sanjurjo have argued that the original paper missed a key mathematical bias that, when accounted for, would imply the existence of the hot hand. They conclude:

“Because researchers have: (1) accepted the null hypothesis that players have a fixed probability of success, and (2) treated the mere belief in the hot hand as a cognitive illusion, the hot hand fallacy itself can be viewed as a fallacy.”

I won’t attempt to answer whether the hot hand exists, but I did want to go over the counterintuitive result from Miller and Sanjurjo. They begin with a thought experiment. Say we flip a fair coin 100 times, and we would like to know the outcome that typically follows heads. So, whenever we flip a head, we get our pen and paper ready and write down the result of the following flip.

Question: What is the expected proportion of heads written on this piece of paper? Obviously one-half, right?

Answer: Less than one-half.

Counterintuitive, right? They walk through the result for 3 flips, showing the expected proportion of heads is 5/12 ≈ 0.42 (the paper works through this calculation by enumerating the possible sequences).
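We can check the 3-flip result exactly by enumerating all eight equally likely sequences (a quick sketch in Python; the function name is my own, and the fractions module keeps the arithmetic exact):

```python
from itertools import product
from fractions import Fraction

def expected_proportion_after_heads(n):
    """Enumerate all 2^n coin-flip sequences; for each sequence with at
    least one flip following a heads, compute the proportion of those
    flips that are heads, then average over the qualifying sequences."""
    proportions = []
    for flips in product([0, 1], repeat=n):
        following = [flips[i] for i in range(1, n) if flips[i - 1] == 1]
        if following:  # sequences like TTT or TTH record nothing
            proportions.append(Fraction(sum(following), len(following)))
    return sum(proportions) / len(proportions)

print(expected_proportion_after_heads(3))  # 5/12
```

Of the eight sequences, six have at least one flip following a heads, and averaging their proportions gives 5/12 rather than 1/2.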

We can see how this bias applies to basketball – instead of coin tosses, we are dealing with basketball shot attempts. It turns out this generalizes to streaks longer than 1 as well (i.e. recording the result after k consecutive successes) and to probabilities of success other than one-half.

In fact, we can simulate this in a few lines in R. Setting nsims = 5000, n = 100, p = .5, we can use:
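The R snippet itself isn't reproduced here, so below is an equivalent sketch in Python (the variable names nsims, n, p, and expected_heads follow the text; the seed is my own choice for reproducibility):

```python
import random

random.seed(1)  # arbitrary seed, just so reruns match
nsims, n, p = 5000, 100, 0.5

proportions = []
for _ in range(nsims):
    flips = [random.random() < p for _ in range(n)]
    # Write down the outcome of every flip that immediately follows a heads.
    recorded = [flips[i] for i in range(1, n) if flips[i - 1]]
    if recorded:  # skip the rare sequence with no flip following a heads
        proportions.append(sum(recorded) / len(recorded))

expected_heads = sum(proportions) / len(proportions)
print(round(expected_heads, 3))  # slightly below one-half
```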

Running this code, I get expected_heads = .494, so slightly less than half. Setting n = 3, we can verify expected_heads = .415, so right around 5/12.

We can also vary the streak length k and the probability of success p in R for different values of n, corresponding to the length of the sequence. Here are my results for probabilities of success p = 0.25, 0.50, 0.75 and streak lengths k = 1, 2, and 3, using 5,000 simulations (for the first graph, I used a fixed streak length of k = 1, and for the second, I used a fixed probability of success p = 0.50):
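The sweep behind those graphs can be sketched as follows (again in Python rather than R; biased_proportion is a hypothetical helper name, and the seed is my own choice):

```python
import random
from itertools import product

def biased_proportion(n, p, k, nsims=5000, seed=0):
    """Average proportion of successes on trials that follow k consecutive
    successes, over nsims simulated length-n Bernoulli(p) sequences."""
    rng = random.Random(seed)
    proportions = []
    for _ in range(nsims):
        flips = [rng.random() < p for _ in range(n)]
        # Record trial i only if the k trials before it were all successes.
        recorded = [flips[i] for i in range(k, n) if all(flips[i - k:i])]
        if recorded:
            proportions.append(sum(recorded) / len(recorded))
    return sum(proportions) / len(proportions)

for p, k in product([0.25, 0.50, 0.75], [1, 2, 3]):
    print(f"p = {p}, k = {k}: {biased_proportion(100, p, k):.3f}")
```

Each printed value sits below the corresponding p, and the gap widens as k grows.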

As we can see, the bias exists for multiple values of p, and it actually grows with the streak length k. Therefore, applying this bias to basketball: if researchers only note the shots taken after a streak of k successes, they must account for the fact that, mathematically, the expected probability of success on those shots is lower than p, that player's typical shooting percentage. So, if they don't account for this bias, they may come to the conclusion that a player is shooting worse than he actually is, which may result in ignoring a hot hand effect if there happens to be one. Miller and Sanjurjo conclude that there is indeed a hot hand effect, which went unnoticed by Gilovich, Vallone, and Tversky because they did not account for this bias.

Miller and Sanjurjo mathematically derive the bias in general settings, but the math is a little hairy. Here's a brief intuition. Suppose that we have a sequence of n Bernoulli(p) random variables X_1, …, X_n, and we want to estimate the probability of success on trial t given that trial t follows k consecutive successes.

Suppose that a researcher is looking at this data, and she decides to randomly circle one of the outcomes that follows a success (take k = 1 for simplicity). That is, if I = {i : X_{i-1} = 1} is the set of indices following a success, she randomly chooses an index τ from this set such that P(τ = i) = 1/|I| for each i in I. If we show that P(X_τ = 1) < p, we have shown the existence of our bias.

It is easily derivable via Bayes' rule that it is enough to show P(τ = i | X_i = 1) < P(τ = i | X_i = 0). Now, the key idea is that because we are choosing randomly from the qualifying trials, when X_i = 1, the set I is 1 element larger than it would otherwise be, meaning the probability of choosing any one index is slightly smaller. That is, because i + 1 is in I exactly when X_i = 1, when X_i = 1 and every other flip in the sequence is the same, |I| increases by 1. Therefore, P(τ = i | X_i = 1) < P(τ = i | X_i = 0) and P(X_τ = 1) < p, as desired.
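To make that key step concrete, here is a tiny check (in Python; the particular flips are an arbitrary illustrative choice): holding every other flip fixed, switching the second flip from tails to heads adds one index to I, shrinking the selection probability 1/|I| of every index in it.

```python
def following_indices(flips):
    """I for the k = 1 case: indices whose previous flip was a heads
    (0-based positions in the tuple)."""
    return [i for i in range(1, len(flips)) if flips[i - 1] == 1]

# Hold the first, third, and fourth flips fixed and toggle the second.
with_heads = (1, 1, 0, 1)  # second flip is a heads
with_tails = (1, 0, 0, 1)  # second flip is a tails

print(following_indices(with_heads))  # [1, 2] -> |I| = 2
print(following_indices(with_tails))  # [1]    -> |I| = 1
```

With the second flip a heads, each index in I is circled with probability 1/2; with it a tails, the lone index is circled with probability 1.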