Friday, September 02, 2005

"Ioannidis says most published research findings are false. This is plausible in his field of medicine where it is easy to imagine that there are more than 800 false hypotheses out of 1000. In medicine, there is hardly any theory to exclude a hypothesis from being tested. Want to avoid colon cancer? Let's see if an apple a day keeps the doctor away. No? What about a serving of bananas? Let's try vitamin C and don't forget red wine. Studies in medicine also have notoriously small sample sizes. Lots of studies that make the NYTimes involve less than 50 people - that reduces the probability that you will accept a true hypothesis and raises the probability that the typical study is false.

So economics does ok on the main factors in the diagram but there are other effects which also reduce the probability the typical result is true and economics has no advantages on these - see the extension.

Sadly, things get really bad when lots of researchers are chasing the same set of hypotheses. Indeed, the larger the number of researchers the more likely the average result is to be false!"
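The quoted argument is really just Bayes' rule: combine a low prior probability that a tested hypothesis is true with low statistical power, and most "significant" findings are false positives. A minimal sketch of that arithmetic (the numbers below are illustrative assumptions in the spirit of the quote, not figures from the post):

```python
def prob_finding_true(prior_true, power, alpha):
    """Probability that a statistically significant result reflects a
    true hypothesis, by Bayes' rule (Ioannidis's positive predictive value)."""
    true_positives = prior_true * power          # true hypotheses that test positive
    false_positives = (1 - prior_true) * alpha   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

# Medicine-style scenario from the quote: well under 200 of 1000 tested
# hypotheses are true (prior 0.1), and small samples mean low power (0.2).
print(round(prob_finding_true(prior_true=0.1, power=0.2, alpha=0.05), 2))  # ≈ 0.31
```

With those inputs, fewer than a third of published positive results would be true, even though every study uses the conventional 5% significance threshold. The result is driven almost entirely by the prior: a field with theory that screens out implausible hypotheses raises `prior_true` and, with it, the fraction of published findings that hold up.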

I think this post at Marginal Revolution is very enlightening.

Consider the implications for public policy decision-making. There are many issues of public policy these days that involve scientific studies and understanding. If most published empirical studies are false, what does this mean about the ability of scientific study to inform our policy makers?

Now let's add one more consideration. Take a look at Aaron Wildavsky's book But Is It True? Wildavsky discusses what happens when science gets mixed up with politics. My short summary is that when science gets mixed up with politics, we end up with politics, and it becomes difficult to navigate through the public debate to know what science is telling us, even when the science is true and not false.