My colleague found an article in which the author claimed to be able to predict the winner [of the Grand National]. Nothing new about that, but this one was allegedly written by a mathematician. His analysis of the past 176 races brought him to the conclusion that the winning horse would have a name comprising one word, either eight or ten letters long, starting with one of four letters. He cited four criteria and gave marks (1-4) for each. The horse with the highest score (13/16) would, he claimed, be this year’s winner. It came in 13th. The one he placed second failed to complete the course after falling at the 12th fence. His third-place pick did, somewhat remarkably, come in third. However, five out of the first six horses had names comprising two words, not one.

I strongly suspect that if the author had not claimed to be a mathematician no one would have even considered taking it seriously.

This also illustrates the important point that statistical trawling often produces garbage. With 176 data points and billions of possible ways for them to be related, there are bound to be some coincidental relationships in the PAST data that will not predict well in the FUTURE. That’s why any applied statistician with a brain uses holdout samples – holding part of the data out of the analysis in order to test whether the relationships seen in the rest of the data still hold.
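A minimal simulation sketches why the holdout check matters. All the numbers here are hypothetical (1,000 candidate "criteria", a 120/56 split of 176 races): the outcomes and the criteria are independent coin flips, yet searching enough criteria always turns up one that looks predictive on the data used to find it – and the holdout sample exposes the coincidence.

```python
import random

random.seed(1)

n, k = 176, 1000  # 176 past races, 1000 candidate "criteria" (hypothetical numbers)
# Pure noise: race outcomes and criteria are independent coin flips,
# so no criterion has any real predictive power.
outcome = [random.random() < 0.5 for _ in range(n)]
features = [[random.random() < 0.5 for _ in range(n)] for _ in range(k)]

train = range(0, 120)    # data used to "discover" a rule
holdout = range(120, n)  # data held out to test the discovered rule

def accuracy(f, idx):
    """Fraction of races in idx where criterion f matches the outcome."""
    return sum(features[f][i] == outcome[i] for i in idx) / len(idx)

# Trawl all 1000 criteria for the one that best "explains" the training data.
best = max(range(k), key=lambda f: accuracy(f, train))

print(f"train accuracy of best rule:   {accuracy(best, train):.2f}")
print(f"holdout accuracy of same rule: {accuracy(best, holdout):.2f}")
```

With this many candidate rules, the winning rule typically scores well above chance on the training races while staying near 50% on the holdout races – the coincidental pattern does not carry forward.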

Expert opinion?

There is also this general advice: just because an expert says it doesn’t make it an expert opinion. Experts make mistakes. And, in particular, others are sometimes hesitant to point out what they think is a mistake, precisely because the other person is an expert. (The same applies to corporate executives who don’t take criticism well. Eventually, there’s nobody around to give them anything but fawning agreement.)

In particular, an expert opinion often depends on certain assumptions about the nature of the problem. If we make assumptions about what the problem is, this often allows us to use theory to solve it. For example, if you go to an emergency room with a badly swollen wrist after a fall, the ER staff are likely to assume that you have a broken wrist. If that assumption is right, they have a whole theory for how to treat it. (But note also that they will confirm this assumption by taking X-rays!)

Assumptions are often simplifications of the situation and may not be fully correct – hence the joke about the economist beginning a paper with “Assume a spherical cow…” If the expert gets the assumptions wrong in a serious way, the conclusion can be based on impeccable logic but still fail.

An expert outside their own field (such as the mathematician in Crocker’s example) is particularly likely to make erroneous assumptions (e.g. that there would be any more than a coincidental relationship between the name of the horse and its probability of winning).

The expert paradox

But this leads us to an interesting paradox:

1. An expert in the field (e.g. an applied statistician with long experience in a particular area) is more likely to make correct assumptions based on their experience in the area.

BUT

2. Experts in the field are also prone to conservatism (“we’ve always done it this way”) and may be less open to new approaches that are better in some way.

Truly new ideas often come from the fringes of the field – from people who perhaps know just enough to be dangerous when they hit upon that initial big new idea.