Singular Value Consulting

Statisticians take themselves too seriously

I suppose most people take themselves too seriously, but I’ve been thinking specifically about how statisticians take themselves too seriously.

The fundamental task of statistics is making decisions in the presence of uncertainty, and that’s hard. You have to make all kinds of simplifying assumptions and arbitrary choices to get anywhere. But after a while you lose sight of these decisions. Or you justify your decisions after the fact, making a virtue out of a necessity. After you’ve worked on a problem long enough, it’s nearly impossible to say “Of course, our whole way of thinking about this might have been wrong from the beginning.”

My concern is not so much “creative” statistics but rather uncreative statistics, rote application of established methods. Statistics is extremely conventional. But a procedure is not objective just because it is conventional. An arbitrary choice made 80 years ago is still an arbitrary choice.

I’ve taken myself too seriously at times in regard to statistical matters; it’s easy to get caught up in your model. But I’m reminded of a talk I heard one time in which the speaker listed a number of embarrassing things that people used to believe. He was not making a smug comparison of how sophisticated we are now compared to our ignorant ancestors. Instead, his point was that we too may be mistaken. He exhorted everyone to look in a mirror and say “I may be wrong. I may be very, very wrong.”

Nice idea, but in my opinion too short for a blog post.
You could either compress this idea to fit in a tweet, or (better) develop it, giving some concrete examples of cases where statisticians take themselves too seriously.

I have never been accused of taking myself too seriously. On the contrary, in graduate school I wrote an “article” on the Rousey Expectorant Distance Intelligence test with measures like internal consistency reliability among the items. Items included expectorant distance while sitting, expectorant distance while standing, etc. I included test-retest reliability, inter-rater reliability and concurrent validity.

My advisor commented disapprovingly, “It was a light treatment of a very serious subject.”

He told me my article on comparative factor structures of the WISC-R and WISC-R Mexicano was better and I should stick to that.

When one person being wrong becomes a mass delusion, e.g. Type A personality and heart attacks, there can be much downstream mischief. The claim that DDT was killing birds, so far as I know wrong, led to millions of deaths in Africa. Much of this wrongness comes from (faulty) statistics. The author, at best, and the rest of us need to know how to recognize claims that are very likely wrong.

Stan, I agree with your points, except that I think mass delusion, or at least willingness to be deluded, may come first. The DDT research came out at a time when people were primed to accept its conclusions. On the other hand, people were unwilling to believe that smoking causes cancer, and it took a tremendous amount of evidence to change public opinion.

I’m hesitant to call false conclusions the result of “faulty” statistics, since that implies we can always be correct if only we’re more careful. The best statistical analyses will reach wrong conclusions fairly often, and there’s no getting around that. See “Why Most Published Research Findings Are False.”