The Texas Sharpshooter Fallacy

It’s logical fallacy time again, here in Skeptictionary corner, just because I feel like it, and because these are usually fairly light on the research, which is useful when I’ve got a headache. These fallacies tend to be the sort of thing your squishy primate brain isn’t innately equipped to handle intelligently, but which you can comfortably deal with using just a dash of good sense.

First, I’m going to blow your mind with something hugely improbable. I’m currently shuffling a deck of cards in between keystrokes, and once it’s more or less randomised (or close enough for our purposes, anyway) I’m going to deal out the top few and see what we get. Watch closely now: I am about to defy some astounding odds. Nothing up my sleeve…

Seven of clubs. Eight of hearts. Jack of clubs. Nine of diamonds. Two of diamonds. Queen of hearts. Jack of spades. And, miracle of miracles, the six of hearts.

I could go on, but that’s impressive enough right there, isn’t it? Do you know what the odds were against my drawing that exact sequence from a shuffled deck? It’s up into trillions to one against, literally. (And I mean “literally” literally there, not figuratively, or non-literally, as it’s often used.) But I laughed in the face of improbability, and went ahead and did it anyway, first time. I barely even broke a sweat.
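For the curious, that "trillions to one" claim is easy to check – the number of ordered eight-card draws from a 52-card deck is just 52 × 51 × … × 45, which a one-liner can confirm:

```python
import math

# Number of ordered 8-card draws from a 52-card deck: 52 * 51 * ... * 45
draws = math.perm(52, 8)
print(draws)  # 30342338208000 -- about 30 trillion to one against any particular sequence
```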

Now, I admit, it may have been a bit more impressive if I’d predicted or aimed for this particular sequence before making the draw, but get off my back, I’m making a rhetorical point here.

Obviously, any sequence of cards in a deck is equally unlikely to appear at random – there are about 80,000 billion billion billion billion billion billion billion different ways of arranging them, after all, so that’s pretty stiff odds against any one particular combination – but somehow arranging them in such a sequence is never a hard thing to do. The point, of course, is that they have to be arranged somehow, and you’re not flying in the face of probability if it’s only afterwards that you point out how unlikely a result it is.
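That "80,000 billion billion…" figure is nothing more exotic than 52 factorial, and if you'd rather not count the billions yourself:

```python
import math

# Number of distinct orderings of a standard 52-card deck: 52!
arrangements = math.factorial(52)
print(f"{arrangements:.4e}")  # about 8.0658e+67
```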

By the same reasoning, some lucky sod won the Lotto jackpot again last week. And we’re supposed to believe s/he just pulled off a millions-to-one shot by chance? I call shenanigans.

This is all fairly easily understood, and the story that gives the fallacy its name – a gunman shoots a hole through a wall, then carefully draws a target around wherever the bullet happened to go – is obviously comical. But it can be a surprisingly insidious mistake.

It’s bad practice to base a hypothesis on certain information, and then point to that same information as evidence that your hypothesis is right. If you flip a coin a hundred times, the odds of the heads/tails split being exactly 50/50 are actually pretty slim; chances are that you’d get a few more of one than the other. So, some sort of uneven leaning is more likely than not, even with a completely fair coin; obviously you can’t start claiming that it’s biased just because you got a few more heads than tails.
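The dead-even split really is unlikely: under a fair coin it's a straightforward binomial calculation, which this quick sketch works out:

```python
import math

# P(exactly 50 heads in 100 fair flips) = C(100, 50) / 2**100
p_even_split = math.comb(100, 50) / 2**100
print(round(p_even_split, 4))  # 0.0796 -- under an 8% chance of a perfect 50/50 split
```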

If it only came up tails twice in 100 trials, then that may be significant – even if you were just idly flipping coins to waste a boring afternoon, rather than testing a hypothesis (For Science!), you were still probably doing so under the assumption that heads and tails are both equally likely, so this would definitely look weird. But ideally a result like this should be repeated and verified, to make sure the original data hasn’t been cherry-picked, or that this exact fallacy isn’t making you read too much into a simple coincidence.
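For comparison, only two tails in a hundred flips really would look weird; summing the relevant binomial terms shows just how far from a fair coin that result is:

```python
import math

# P(at most 2 tails in 100 fair flips): binomial terms for k = 0, 1, 2 tails
p = sum(math.comb(100, k) for k in range(3)) / 2**100
print(p)  # roughly 4e-27 -- vanishingly unlikely for a fair coin
```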

A study on whether people with some star signs are more likely to have road accidents than others is not necessarily justified in assuming any kind of correlation; if no hypothesis was proposed before the data was collected, then the differences observed could be nothing more than expected random fluctuations. It’d be weird if accident rates were split entirely evenly twelve ways. And apparently the most dangerous sign, Aries, was responsible for “nearly 9% of all road accidents”. When you consider that one-twelfth is about 8.33% anyway, this doesn’t sound so impressive. If it was a big enough change, observed in a big enough sample size, then maybe it could be significant if there’d been a hypothesis to begin with, but going “Ooh, that number’s bigger than that other number” doesn’t qualify as a statistical analysis.
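To put a rough number on that "nearly 9%": the article gives no sample size, so the values of n below are invented purely for illustration, but a normal-approximation z-score sketch shows how completely the verdict depends on how many accidents were actually counted:

```python
import math

# Hypothetical sketch: is 9% of accidents for one sign out of twelve surprising?
# The sample sizes n are made up for illustration; the study quotes none.
p0 = 1 / 12            # expected share per sign, about 8.33%
observed = 0.09        # "nearly 9%" reported for Aries

for n in (500, 5000, 50000):
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null
    z = (observed - p0) / se
    print(n, round(z, 2))               # z of about 0.54, 1.71 and 5.39 respectively
```

The same headline percentage is unremarkable noise at 500 accidents and genuinely striking at 50,000 – which is exactly why "that number's bigger than that other number" isn't an analysis.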

(The explanations attributed to astrologers, for why these results would be expected, seem entirely post-hoc and deeply selective; Aries being fiery and headstrong makes sense, but “prudent and over-cautious” Capricorns were the 6th most dangerous, “protective” Cancer was 4th, and “adventurous and careless” Sagittarius were the safest of the lot.)

In a more subtle variation of the origin story, our sharpshooter unleashes a tirade of multiple rounds against the poor unsuspecting wall (which was only standing there doing its job, after all), and then draws his target around a cluster of several holes, which just happen to be closer together than most of the others, and which look like a fairly convincing series of hits (particularly with the target drawn in after the fact). But although it can sometimes look weird, this kind of clustering is only to be expected when the data is flying all over the place.

People tend to have a clear idea of what randomness ought to look like. It should bounce crazily and unpredictably all over the place, with heads coming up as often as tails (or whatever the variables are), and recognisable patterns definitely aren’t allowed. A string of heads amidst an assorted mass looks “less random”, and might stand out. But there are quite a number of “non-random” patterns that could be found in a series of coin-flips – a run of heads, a run of tails, a run alternating between the two, consecutive short runs of each, and so on – and the odds of some of these turning up in a long enough series might not be that long. It’s the monkeys/typewriter/Hamlet principle on a smaller scale. Spend a few hours tossing a coin, eventually you’ll get ten heads in a row, and probably stumble across a few other interesting patterns on the way.
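You don't have to take my word for the ten-heads-in-a-row claim; a minimal simulation (variable names mine) finds the longest streak of heads in a long run of fair flips:

```python
import random

random.seed(1)  # fixed seed so the demo is reproducible

# Flip a fair coin many times and track the longest run of heads
flips = [random.random() < 0.5 for _ in range(100_000)]
longest = current = 0
for heads in flips:
    current = current + 1 if heads else 0
    longest = max(longest, current)

# The longest streak typically lands somewhere around log2(n), i.e. about 17 here
print(longest)
```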

If you watch random noise long enough, you will see patterns appearing. If you only point them out afterwards, though, it’s a lot less likely to be impressive or interesting. The results of the Global Consciousness Project would seem to be a good example of this, but my lengthy digression about them has been cut, and I’ll try and put that in a separate article soon.


3 Responses

I’m not sure if it’s the same fallacy at work (I suspect that it isn’t, for various reasons), but this reminds me a lot of what Creationists do with ‘statistics’ – “The odds of [x] evolving are billions to one!”, they cry, usually with no way at all of backing up a number like that.

There should be some sort of law against using statistics inappropriately or if you don’t know what you’re talking about. (And I’m more than willing to admit that this would preclude myself and, sadly, Richard Dawkins from tossing around arguments based on ‘unlikeliness’.)