The fallacy of the excluded middle — statistical philosophy edition

I happened to come across this post from 2012 and noticed a point I’d like to share again. I was discussing an article by David Cox and Deborah Mayo, in which Cox wrote:

[Bayesians’] conceptual theories are trying to do two entirely different things. One is trying to extract information from the data, while the other, personalistic theory, is trying to indicate what you should believe, with regard to information from the data and other, prior, information treated equally seriously. These are two very different things.

I replied:

Yes, but Cox is missing something important! He defines two goals: (a) Extracting information from the data. (b) A “personalistic theory” of “what you should believe.” I’m talking about something in between, which is inference for the population. I think Laplace would understand what I’m talking about here. The sample is (typically) of no interest in itself, it’s just a means to learning about the population. But my inferences about the population aren’t “personalistic”—at least, no more than the dudes at CERN are personalistic when they’re trying to learn about particle theory from cyclotron experiments, and no more than the Census and the Bureau of Labor Statistics are personalistic when they’re trying to learn about the U.S. economy from sample data.
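To make "inference for the population" concrete, here is a minimal sketch, using only the Python standard library and a made-up simulated population (the numbers are illustrative, not real data): we draw a sample and compute an interval estimate whose target is the population mean, not the sample itself.

```python
import random
import statistics

random.seed(0)

# Hypothetical "population" (e.g., incomes) -- purely illustrative.
population = [random.lognormvariate(10, 0.5) for _ in range(100_000)]

# The sample is just a means to learn about the population.
sample = random.sample(population, 400)

sample_mean = statistics.mean(sample)
std_err = statistics.stdev(sample) / len(sample) ** 0.5

# An approximate 95% interval: a statement about the population mean,
# not merely a summary of the 400 sampled values.
lo, hi = sample_mean - 1.96 * std_err, sample_mean + 1.96 * std_err
```

The same exercise could be done with a Bayesian posterior interval instead; the point is only that the inferential target sits at the population level, between "summarize the data" and "personal belief."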

I feel like this fallacy-of-the-excluded-middle move happens a lot: people dismiss certain statistical approaches by defining one's goals too restrictively. There's a wide, wide world out there between the very narrow "extract information from the data" and the very vague "indicate what you should believe." Within that gap falls most of the statistical work I've ever done or plan to do: