Posted
by
kdawson
on Sunday October 19, 2008 @01:21PM
from the peers-can-be-wrong-too dept.

Hugh Pickens writes "Researchers have found that the winner's curse may apply to the publication of scientific papers and that incorrect findings are more likely to end up in print than correct findings. Dr John Ioannidis bases his argument about incorrect research partly on a study of 49 papers on the effectiveness of medical interventions published in leading journals that had been cited by more than 1,000 other scientists, and his finding that, within only a few years, almost a third of the papers had been refuted by other studies. Ioannidis argues that scientific research is so difficult — the sample sizes must be big and the analysis rigorous — that most research may end up being wrong, and the 'hotter' the field, the greater the competition is, and the more likely that published research in top journals could be wrong. Another study earlier this year found that among the studies submitted to the FDA about the effectiveness of antidepressants, almost all of those with positive results were published, whereas very few of those with negative results saw print, although negative results are potentially just as informative as positive (if less exciting)."

Peer review no doubt helps to limit people who intentionally want to cause problems. Sokal's bullshit paper on quantum gravity (see The Sokal Hoax [amazon.com] ) made it into print only through a non-peer-reviewed journal. While it is disturbing to think much published scholarship is unreliable, at least it isn't necessarily malicious.

Peer review can both help and hinder - there's the reputation effect of guest authorship, where having a well-known senior academic's name on the paper helps it through no matter how absurd the findings.

Then there are reviewers who review papers they do not have the expertise to review. And to be frank, I've seen some pretty bloody ludicrous comments from supposedly expert reviewers - the sort of mistakes 1st-year students wouldn't make.

But I do think that the majority of researchers are diligent and believe in what they submit. And let's face it - if it is an emerging area and you have a neat result that either refutes someone else's grand theory or is just really novel, you're going to want to see that in print. It is because we seek to replicate research that findings are later falsified. This isn't evidence that the system is broken; it is precisely how it should work. It is the work that can't be falsified that stands the test of time and contributes to our knowledge.

If there are people who think that falsifying published research is somehow a bad thing - that it shows there's a problem in research standards - then they really, really need to go back to school and read some Karl Popper.

If there are people who think that falsifying published research is somehow a bad thing - that it shows there's a problem in research standards - then they really, really need to go back to school and read some Karl Popper.

I think this could be phrased more carefully to make explicit the difference between theoretical and experimental research. Theoretical research being falsified is the scientific method at work. Experimental research being falsified is less cut and dried. Sometimes it's due to previously unknown effects, and the result is an increase in knowledge; sometimes it's due to poor analysis of the results - failure to account for systematic errors, or statistical incompetence - and it would be better that it not be published.

Indeed, in my field (a sub-area of computer science) people are usually highly skeptical of any supposedly important new result that was first published in one of the highly prestigious but generalist journals, like Nature or Science. These often end up being, if not outright wrong, at the very least seriously over-extending their claims or the importance of their claims, in a way that would never get them published in a specialized journal with an editorial board of actual experts in the specific area in question.

This is only exacerbated by the fact that, because generalist publications know they don't have expertise in every specialized area on staff, they often ask the authors to suggest potential reviewers of their own papers. Of course, authors are likely to suggest reviewers who they think will like the paper, not the ones who would give it a grilling.

I think the interest of this particular study is not so much that a lot of science turns out to be wrong, but that a lot of the most prestigious publication venues turned out to be wrong more often.

Peer review can both help and hinder - there's the reputation effect of guest authorship where having a well-known, senior, academic's name on the paper helps it through no matter how absurd the findings.

Then there are reviewers who review papers they do not have the expertise to review. And to be frank I've seen some pretty bloody ludicrous comments from supposedly expert reviewers - the sort of stuff 1st year students wouldn't make.

So... you might say peer review is not a perfect system, and if there were a perfect system, we should switch to that.

Let's see here... Divination! That is what we should do! Anyone have a Ouija board?

(Kidding. I know you were explaining the problem, not saying we should abandon peer review.)

It isn't so much a problem of peer review. Peer review has limitations, of course: to thoroughly review an article, one would have to repeat the experiment, which most reviewers (for good reasons) do not do. Peer review is good for what it does - giving feedback to the authors of the paper - and, as you said, it does have a filtering effect.

The other issue is, a lot of papers aren't really worth much at all. Nature might get their share of interesting articles, but in smaller journals, a lot of research ends up being something like, "I had this idea, and I did a small little experiment to see if it was worth anything. Maybe it is." But of course, with a small little experiment, your chances of being wrong are greater: it's just an entry-point for someone else to maybe continue research in an interesting direction. And I have done peer review, FWIW (and if you trust random guys you meet on the internet).

The problem is money. If we lived in a world where you gained power through the trust and esteem of your fellow man, no one would publish things they weren't confident in. But we don't. We live in a world where you gain power through the accumulation of leverage that allows you to dominate your fellow man despite their wishes and opinions. In this world, who gives a shit if you were wrong or not? Who gives a shit if anyone knows? It doesn't matter, as long as you got paid before they find out. Then y

I don't think there's any evidence that either economics or politics forms a good basis for encouraging right behavior. Both seem to have failed miserably, and what is clearly needed is a third system no one has apparently thought of yet. Or perhaps such a system cannot exist, it is not within the fundamental nature of man or the universe he occupies.

"I had this idea, and I did a small little experiment to see if it was worth anything. Maybe it is."

If you just had an idea, and you did a small little experiment, you would get knocked back for "methodological issues". You are better off having an idea, then using a big simulation output and datamining (with a boutique distance metric) to try and bluster the readers into thinking you had done something original. Just testing ideas with experimentation is soooo 1950s. Besides, it blows the budget and doesn't attract funding.

Peer review doesn't always help. I've studied papers that have the most detailed, thoroughly tested research but end up relegated to some obscure journal because the people peer reviewing the topic don't agree with it. In one case, the pioneers of the field of siRNA lambasted a study which showed that short RNAs can enhance transcription, as well as negatively regulate it. Because this was so far outside of their model for how siRNAs work, they dismissed the work as nonsense, despite the paper showing five replicates of every experiment, and practically putting their entire work, step by step, into the supplementary materials. Papers get into good journals with far less than what I saw for this one, but peer review condemned it to obscurity.
I think peer review works, but only if the people reviewing keep an open mind and don't get piqued if the findings disagree with their own views.

How long until some researcher releases a study showing that Dr. Ioannidis' research findings are themselves wrong?

Who needs a study? Simply reading the article shows that he has fallen precisely into the trap that he is complaining about, i.e. overstating his results. He forgets one very simple point: not all science is medicine/biology.

As a particle physicist I would strongly disagree with his conclusions, at least as applied to experimental particle physics. It is certainly true that some papers turn out to be wrong, but this is rare and usually ends up as a 'big thing' in the field. Outside my field I'd be very surprised if the majority of physics or even chemistry papers turn out to be wrong (but I'm certainly not a chemist, so this is just my impression).

As for medicine, I can certainly see that they have a problem. After all, how many times have we been told "don't eat X/do Y, it is bad for you", only later to find out that actually it isn't half as bad as they thought and may even have benefits? Just because a lot of medical research is flawed does not mean that all of science has the problem on the same scale.

So, Dr. Ioannidis: either show us some data from chemistry, maths and physics, or stop complaining that all of science has a problem on this scale. From where I stand, your evidence points to a problem with bioscience/medical research only.

I would argue that this problem is not only pretty much non-existent in chemistry and physics, but that even biology - at least cell and molecular biology - does not have this issue either. Typically, when a biologist publishes a protein structure or sequences an organism's DNA, no one shows up later and says it is wrong. In fact, it's rather big news when it does happen.

For example, there was a bit of a controversy among protein crystallographers recently. A person had published a paper on a protein structure that seemed to contradict everything previously thought about the protein's function. It turned out that they had used the wrong parameters in their phasing program. However, this doesn't happen to most papers, and certainly not a majority of them.

I would say that this problem is mostly specific to medical research. By its very nature, medical research is a good deal more prone to human fallibility since both subjects and researchers are human beings.

No, you are exactly right. His paper was really only intended for the fields of population genetics and genetic epidemiology (his fields), where people have been using the standard p < 0.05 statistical cutoff as their metric for whether a given analysis is significant. So if you have 20 research groups analyze the same question (like whether a mutation in gene X is responsible for disease Y), then by that methodology, on average 1 of the 20 researchers will find a statistically significant result due to chance alone.
This is old news, and journals stopped accepting papers that *only* had that statistical analysis about 3-5 years ago. Almost without exception, they now require you to show some kind of biological verification (show that mutant protein X actually is defective and has reduced activity), OR you can do a replication in a completely independent sample, which is unlikely (again, a 1-in-20 chance) to be significant by chance.
Unfortunately, people have misinterpreted his paper and are applying his point to other fields like chemistry, astrophysics, or even areas of biology where it doesn't apply.
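The multiple-comparisons arithmetic above is easy to sanity-check. Here's a minimal sketch in plain Python; the 20 groups and the p < 0.05 cutoff come from the comment above, and the only extra assumption is the standard one that under a true null hypothesis a p-value is uniform on [0, 1):

```python
import random

random.seed(42)

def fraction_with_false_positive(n_groups=20, alpha=0.05, n_trials=10_000):
    """Simulate many 'questions', each analyzed independently by n_groups
    labs testing a true null hypothesis; return the fraction of questions
    for which at least one lab sees p < alpha purely by chance."""
    hits = 0
    for _ in range(n_trials):
        # Under the null, each lab's p-value is uniform on [0, 1).
        if any(random.random() < alpha for _ in range(n_groups)):
            hits += 1
    return hits / n_trials

frac = fraction_with_false_positive()
print(round(frac, 2))  # analytically 1 - 0.95**20, about 0.64
```

So it's actually a bit worse than "1 of the 20 will be significant": the expected number of spurious hits is 20 x 0.05 = 1, but roughly two out of three such questions produce at least one false positive - which is exactly why journals started demanding independent replication.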

After all, how many times have we been told "don't eat X/do Y, it is bad for you", only later to find out that actually it isn't half as bad as they thought and may even have benefits?

But how often do you actually see "don't eat X/do Y, it is bad for you" in a legitimate research paper? Usually I see those kinds of statements on the morning news, or in newspaper headlines, or in populist nutrition books (always a bad place to look for advice). Usually some new research comes out that scientists think is interesting; the mass media picks up on it and wants to dumb it down for their readers/listeners. But the next time you see such a statement in the mass media, try to take a look at the actual paper.

After all, how many times have we been told "don't eat X/do Y, it is bad for you", only later to find out that actually it isn't half as bad as they thought and may even have benefits? Just because a lot of medical research is flawed does not mean that all of science has the problem on the same scale.

The problem here is that the popular press always report the very latest 'finding' in what is a complex field. Yet we should know that not only in medicine, but in virtually all experimental sciences, a single paper is not sufficient to establish some new profound truth.

Dr Ioannidis' largest problem is that he thinks he has identified a problem. There isn't one. This is how science is supposed to work! We publish methodologies so that the work can be replicated by other teams. Some findings survive further scrutiny, some don't. The "hotter" the field, the less you are going to rely on the latest single study, no?

So he's found 1/3 of studies were refuted by later work. Great, they were refuted - what's the problem? And how do we move from that to the conclusion that "most" scientific papers (even outside the hotter fields of bio-medical research) are wrong? And what about looking at outcomes? The advances of medicine even in my lifetime are astounding; this is hardly the result of a system that isn't working!

[I]f you have 66% of published results being found to be wrong you have a huge problem!

I agree. Just as well that a mere 16% were outright refuted then, isn't it? :P With another 16% shown to have weaker effects than originally reported. (Ioannidis JP, 'Contradicted and Initially Stronger Effects...', JAMA 2005;294:218-228.) Moreover, the study was based on 45 papers with an intentional selection bias: they were both highly cited and claimed high efficacy. Now such criteria might address Dr Ioannidis

Published articles are also not necessarily all supposed to be correct in every way. Some are openly speculative, MANY show an interesting result and call for further investigation. Do those count as incorrect too?

You said, "So, Dr. Ioannidis either show us some data from chemistry, maths and physics or stop complaining that all of science has a problem on this scale."

I'm sympathetic to the direction you are going, but I don't agree completely.

The problem is due to being able to get extra money by exaggerating claims. The problem is in every area of science, in my experience. If there is no chance to get more money by exaggerating claims, then I agree, the problem seems minimal.

In computing, claims about "Artificial Intelligence" have been extremely exaggerated.

In physics, there are those who claim they may have found a method of cold nuclear fusion. Search for Sonofusion [google.com], for example: fusion that is caused by extremely intense ultrasonic sound. Some of those claims are exaggerated, or there are omissions of the limitations.

What about Cold Fusion? What about Schön's fabricated nanotechnology data? What about the memory of water?

The issue is the same with physics, chemistry, and all the others. A large part of the problem is that the top journals *want* papers that can make the news on Thursday, and will select papers that may not have been fully vetted; they also have a bias towards "big shots" (who have a much easier time publishing any kind of trash than young researchers do).

Those are exceptionally rare outliers that were discovered very quickly, and these examples don't jibe with the type of problem described in the article, which the GP nails when he points out how concentrated it is in medical science.

Also, top journals don't "want" papers in the sense that they get the ones they want. Peer reviewers decide what's worth publishing, and I have yet to meet one who feels that an article should be published because it will make the evening news. Big shots do get a big advantage, but in most cases it's because they have a history of good research. Things DO slip through the cracks, but in Chemistry and Physics, those things are within error bars.

Definitely not most. All. The process of science is using theory to predict a result, carrying out an experiment to test whether that result occurs or not, and revising the theory if necessary.

We cannot ever prove that the current theory is, in fact, "correct." For all we know, there is some rule encoded into the stuff of reality that gravitation will reverse itself next Tuesday, and we can neither disprove this nor predict it. All science can offer is the minimally-complex theory that fits all currently known data.

Title is wrong. It says that the FDA is corrupt, and that published papers take around 3 years to get peer reviewed, at which point the bad ones are removed. What a blatant attack on science generally. Sure, paper publishing needs to be reviewed, but "most published research is false" is an outright LIE. "Most published research" includes the entire basis of our scientific knowledge. If most of our theories in biology were wrong, we realistically wouldn't have been able to move forward into working with genes, since we wouldn't have known what a cell did.

No question that the corporatizing of research can lead to conflicts of interest, but what are the alternatives?

You mention drugs that kill people. Well, that would be all of them - including sugar pills. In fact, if you created two groups of 1000 people and gave half of them purple sugar pills, and the other half green sugar pills, you'd find that one group vs the other would have a statistically significant increase in heart attack rates some percentage of the time. If you consider the green pills placebos then you'd erroneously prove that purple sugar pills are dangerous. Statistics with 95% confidence are wrong one time in 20...
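The purple-vs-green thought experiment above is easy to reproduce. A minimal sketch in plain Python follows; the group size of 1000 comes from the comment above, while the 5% heart-attack rate and the pooled two-proportion z-test are my own assumptions for illustration:

```python
import math
import random

random.seed(0)

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    # Standard normal CDF via erf, then the two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(rate=0.05, n=1000, alpha=0.05, n_trials=1000):
    """Give two groups of n people identical sugar pills (same true event
    rate) and count how often the difference looks 'significant'."""
    sig = 0
    for _ in range(n_trials):
        x1 = sum(random.random() < rate for _ in range(n))  # purple group
        x2 = sum(random.random() < rate for _ in range(n))  # green group
        if two_proportion_p(x1, n, x2, n) < alpha:
            sig += 1
    return sig / n_trials

fp = false_positive_rate()
print(round(fp, 3))  # hovers around 0.05, i.e. roughly one run in twenty
```

Both groups take the exact same pill, yet about one run in twenty declares a "significant" difference between them - which is precisely what a 95% confidence threshold promises, and precisely the trap when 20 labs chase the same question.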

The same applies to efficacy - especially in stuff like antidepressants, where we have almost no understanding of the physiology of the brain. A psychiatrist I was chatting with speculated that it is like treating "cough" back in the 1400s - even if you had penicillin back then, it wouldn't be effective against "cough", since doctors of the time had no way of evaluating what the cause of "cough" was and consequently what the appropriate treatment was. It would be a complete mystery to them why one person might miraculously recover with an antibiotic and another would not benefit at all.

I'm all for having independently-funded clinical trials to test the safety/efficacy of pharmaceuticals, but that would cost taxpayers a fortune. Also - how do you decide which drugs are to be tested? Will we see lobbyists bribing congressmen to make sure their companies' products get tested before their competitors' (leading to huge profits for them)? The problem with publicly funded R&D is that it politicizes research. The private R&D system combined with patents at least prioritizes research that will lead to the largest number of people using a drug/device/procedure (even if affordability becomes an issue).

I'm not actually convinced that corporate malfeasance is the reason for the recent string of drug safety problems. I think a few issues are more significant:

1. Existing drugs work moderately well - so it raises the bar for new drugs in terms of efficacy.
2. Clinical trials are becoming more and more effective at detecting side effects.
3. Doctors tend to assume that well-established drugs are safe. So even a tiny increase in risk with a new drug leads doctors to avoid it (even if there isn't any strong evidence that older drugs are any better).
4. The tort system ensures that doctors are better off undertreating a disease than risking a side effect. When a patient dies of cancer due to less aggressive treatment, it is the cancer's fault. When a patient dies from a side effect, it is the doctor's fault or the drug company's. This neglects the risk/reward tradeoffs that all treatment decisions involve.
5. All the "easy" drugs have already been discovered. This leads to increasing costs for drug R&D and less selectivity for drugs that enter trials.
6. The large number of drugs in development creates enormous demand for clinical trial subjects. This leads to doctors inappropriately enrolling patients, and massive costs. Doctors are basically paid by the subject, so they have an incentive to commit fraud, and more demand means higher fees, which means more expensive drugs.

I'm not sure what the solutions to these problems are. Industry consolidation would probably help - fewer companies competing means that clinical trial costs would drop, there would be less rush, and drug prices would rise, so companies would have less need to cut corners. Government funding of drug development (from start to finish) might not hurt - the patent issues go away if government just picks up the tab (with all the attendant issues of politicized medical research).

There really are no easy answers though. Sure, people offer easy answers to the pharmaceutical problem, but they don't seem any better than the easy answers to crime, world peace, and all those other things that people oversimplify...

At the risk of being modded down to oblivion, I am still curious how this affects popular theories like global warming. We already have people claiming that the science is wrong, and they are generally mocked and ignored because their work isn't published in major journals. Well, this story seems to indicate that even published claims have a larger chance of being incorrect.

Anyways, it seems that if you don't tow the line on climate change, there is no room for you anywhere. So where does this leave the accuracy of the claims, in light of how common it seems to be that they can be wrong even when published in a respectable scientific journal? I know the IPCC looked at them, but they didn't validate any of the claims; they only looked at whether or not humans were the cause (that was their charter, and they acknowledged this in their reporting).

The idea behind this is that pharmaceutical studies are difficult and expensive to perform, so they rarely get challenged for some time, and when they do, the challengers are more often in more obscure journals: so when people go to cite statistics and findings, they don't notice the fact that what they're quoting has been invalidated. Climate change has been studied over and over again and subjected to extensive analysis by many minds for many years, particularly because so many people have questioned it. To suggest what you are saying now is a little behind the times.

But the concept isn't unique to pharmaceutical studies. With the general attitude towards dissenters of the Faith that has grown up around global warming, I don't see why it isn't true here either. I mean, the IPCC used faulty temperature data in their evaluations; Al Gore exaggerated quite a bit and used outdated charts because they proved his point better; and Hansen, the guy who pretty much brought global warming into the limelight, admitted to exaggerating claims and justified it by claiming it was necessary to make people aware of the problems.

I mean, it is probably even more prevalent when the data sets used in studies aren't available to people; the temperature data that was proven to be wrong was reverse-engineered because Hansen refused to disclose the data. People wanting to review these studies have been mocked and denied access to the data, or had the data set obfuscated to make it even more difficult to work with. There was even one instance where someone was told that he couldn't have the data because he was going to pick the work apart and the author didn't want to help him do that. Not very scientific, if you ask me. There is definitely room to question what is being said. Most people in disagreement today are in contention over the causes and the proposed solutions, which, to date, don't seem to be helping out in Europe.

Nobody is arguing that the climate isn't changing; it's about the cause. The AGW premise is plausible and now even looks very likely, yet you have to consider the following: the Earth has warmed and cooled before without human intervention, and establishing cause based on evolving and unproven computer modeling - eliminating other causes while we are still discovering potential causes - is a bit of a stretch. Why would you think that climatology is easier, cheaper and more certain than pharmacology and bio-medicine?

The article is focused on largely medical studies, which are attempting to move toward curing people. Therefore, studies that cure people are interesting, studies that don't cure people aren't interesting. Global warming is different. Disproving global warming is VERY interesting, and would get published more readily than something supporting global warming.

How about the recent stories which prove that glaciers in the north have been *growing*?

A handful of glaciers are indeed growing. The vast majority are shrinking [skepticalscience.com], and they are shrinking much more than the handful of anomalous ones are growing.

A handful of unusual data points in a complex system does not prove a trend. It's as if you were to argue, "Scientists *say* that cigarette smoking will damage your health. But I know one guy who smoked and lived to a ripe old age. Therefore, these `scientific' findings are clearly the result of some politically-motivated anti-tobacco conspiracy."

If X=Y, then there is no reason that X or Y can't equal Z either. The process is just as transparent in other areas, and just because this particular article is limited to pharmaceutical studies, it doesn't mean that the same garbage-in-garbage-out rules don't apply.

In fact, the IPCC did their reports using flawed temperature data, and people are still pulling up the exaggerated Mann hockey-stick graph as their proof, even when there is a more accurate one available. An Inconvenient Truth by Al Gore did t

> I fail to see how you can draw any conclusions about the reliability of atmospheric physics papers from a study of biomedical research papers.

Biomedical research is a lot more amenable to verification and falsification, thus an argument can be made that errors are getting corrected. Global warming is faith-based: its predictions aren't made in anything resembling a controlled scientific environment, and the only way to test its predictions is to do nothing for twenty years and see if the disasters predicted come to pass. Now consider that rerunning a medical test and proving the original paper wrong will get a researcher rewarded, while writing anything whatsoever questioning human-caused global warming gets a researcher labeled a whore of the oil companies, and the argument that the science on GW might be at least as flawed as these biomedical papers grows.

Global warming is faith-based: its predictions aren't made in anything resembling a controlled scientific environment, and the only way to test its predictions is to do nothing for twenty years and see if the disasters predicted come to pass.

You seem to have NO idea at all what you are saying. Global warming is based on very exact scientific studies, where "faith" is needed only insofar as one believes that there exists a reality around us that follows self-consistent laws.

Now consider that rerunning a medical test and proving the original paper wrong will get a researcher rewarded, while writing anything whatsoever questioning human-caused global warming gets a researcher labeled a whore of the oil companies, and the argument that the science on GW might be at least as flawed as these biomedical papers grows.

I think it's a mistake to imply that biologists are better people or scientists than atmospheric physicists. The main difference is not in scientific integrity, ethics, or bias, but that there is one major split in the atmospheric physics field: whether or not global warming is going on. In biology, there are still divisions where either side starts adopting the "If you disagree with me you're a (insert insult here)" attitude, but they are many, smaller ones. You don't have 30 labs entrenched on either side, you have one

I fail to see how you can draw any conclusions about the reliability of atmospheric physics papers from a study of biomedical research papers.

It's not much more difficult than extrapolating it to high-energy physics. (It doesn't.)

In other news, researchers have discovered that in the medical sciences, 1/3 == "most". When asked for a comment, a leading mathematician who refused to be named described this as "a crock of shit", and added that the old saw about it being dangerous to let a medic near a calculator remains true.

Scientific consensus in any field is reached by research, which is then published and subsequently challenged by other scientists.

At the point of initial research, the idea being tested is merely a hypothesis, otherwise known as "educated conjecture".

In the case of medicine, there are numerous uncontrollable variables which lead to a high degree of error in small case studies. (the placebo effect, environmental factors, the variable resilience of each patient, etc)

"Global warming does not fall under this. It has been researched, and retested, and re-challenged numerous times."

On this point you are walking on loose sand. What do you mean, retested? You cannot test global warming. You can only make observations about it and then form your own opinion based on those facts. You cannot create two identical planets, mess with their CO2 levels and then compare the results. All your data is based on your measurements and the conclusions you draw from them. That is where the controversy lies.

That's about as smart as saying that you can't test the theory of evolution because we didn't have two identical Earths from 2 million years ago, so we can't verify that species evolve. Most of science works using models, and any basic chemistry lab can verify that increased levels of CO2 absorb more light at certain wavelengths, which leads to higher temperatures. Any more professional biology lab can verify that organisms evolve by growing bacteria for a few days.

The issues on global warming that are in dispute are numerous... we keep hearing predictions from models that have not been shown to have any predictive power, and they keep getting more alarmist, along with the increasingly ridiculous claims that every example of bad weather is a function of global warming. The issue is the "hockey stick" part of the forward feedback loop... that's the claim that, because events will create forward feedback, we will hit a point in a few years where it isn't preventable, becaus

They do have predictive power. They predicted, a decade ago, the trend of markedly increased storm severity and frequency.

They predicted the droughts and climate changes which are afflicting numerous regions worldwide, including France, Spain, and my area of the US.

The issue with global warming is we know that the current models show this forward feedback, but we KNOW that the models are incomplete

This statement is intellectually dishonest

we also know the model of the universe (including most of our scientific theory) is incomplete. This doesn't stop us from applying the current model to everything from the production of your computer to

Numerous independent climatology models (we're talking virtually every accredited university and think tank on earth with enough resources) based on hundreds of thousands of years of data from geological, oceanographic, and ice core samples, run through supercomputers millions of times.

At the risk of being modded down to oblivion, I am still curious how this affects popular theories like global warming.

Global warming is not popular, it is downright scary. If a group of scientists disproves that our use of hydrocarbons has a significant effect on global warming, those scientists will be extremely popular and will probably share a Nobel Prize.

No, I mean tow the line. And yes, I know the difference. I'm saying that if you have something to say about Global warming, it better support and further the cause or you are ostracized and labeled as an oil company whore or something.

Sadly, scientific progress tends to be made when dogmatic leaders die. Three decades ago, scientists were worrying about a new ice age. We know from past evidence that the Earth naturally experiences "global warming" to melt ice ages, and then cools back down again.

I would think that "Publish or Perish" must contribute to a lot of crappy papers getting published. Shovel it out the door, somebody else says it's wrong, write another grant for a study to verify that, shovel that one out the door, rinse, lather, repeat...

My significant other is quitting grad school as soon as she gets her Master's in Neuroscience (she's in the PhD/Master's program). She can't stand the constant pressure of publishing, nor the need to constantly justify grant writing. She's not the best researcher, but the pressure is enough to drive her to not caring anymore. She'll take her consolation prize and get on with her life.

Maybe she's just not cut out for academia, though academia is losing out on the great potential she has.

I would think that "Publish or Perish" must contribute to a lot of crappy papers getting published. Shovel it out the door, somebody else says it's wrong, write another grant for a study to verify that, shovel that one out the door, rinse, lather, repeat...

It does indeed. Thirty years ago an assistant professor could get tenure by publishing one good paper per year in an archival journal. Nowadays an assistant professor is expected to publish four or more journal papers per year. This leads to the well-known academic concept of the "MPU", i.e. the minimum publishable unit, or "just how many papers can I squeeze out of this one good idea?". This also leads to the backwards situation where a senior professor sitting on a Promotion & Tenure Committee may have fewer published papers (and fewer awarded research dollars) over his entire career than the assistant professor whose tenure he is voting on. Believe me when I say that the hypocrisy of this double standard is not lost on the junior faculty.

There's no doubt in my mind that the signal-to-noise ratio in archival journal papers has plummeted in the past two decades. 90% of all journal papers are superfluous, repetitive, or lacking in any significant advancement of the art, and I'll plainly admit that includes my own papers. Everyone in academia realizes what's going on, and knows it isn't good for the students or the faculty, but unfortunately that's the way the beans get counted in the academic world.

Crappy papers don't get many citations or much attention in general. Also, a scientific article does not need to be crap even if it is later shown to be incorrect. Even the greatest scientists have proposed wrong theories and connections.

Basic idea: high-profile journals want papers that are new and exciting. This means that scientists have an incentive to 1) rush their work, 2) choose fields that are popular, and 3) claim that their papers solve more than they actually do. This leads to sloppy, dishonest papers.

I'm not going to judge this paper - I haven't read it thoroughly - but to pair a title like "Why most published research findings are false" to a pretty well-known problem seems itself like an example of problem 3!

AC makes an interesting point, though any study that makes that basic an error will be corrected immediately and not cited hundreds of times, as this paper was. I suspect the errors involved are often more subtle - more a case of hopeful thinking than bad arithmetic. ("Never believe a thing simply because you want it to be true" - Stephenson got that one right.)

News flash! What works in one situation (or for one person) might not work so well in another. Too little research takes the context into account, particularly regarding any research that is human-related, and so it becomes easy to "disprove" prior findings.

Who's to say that the papers refuting this research are correct? It seems to be taken for granted that the dissenting papers are correct, and thus the original papers are wrong. It seems likely that the refuting papers may be wrong, or that there are complex situations in which both papers are correct (to differing degrees).

Several people will post about how this validates the TINY, TINY, TINY number of scientists, the LARGE number of completely uneducated "opinion" formers, and the MASSIVE number of people who think that "belief" is the same as fact.

What they will miss:

This article talks about how things are put out there and then invalidated by SUBSEQUENT PUBLISHED RESEARCH, not about how there is a great conspiracy around something being "right" and everyone shouting down those who dare to disagree. Global warming is something that has consistently been found to be happening, and while certain bits have been revised due to subsequent research, most of that research has found that the earlier models were in fact too optimistic.

This article doesn't strengthen your misguided and uneducated belief that global warming isn't real. When even the Republican candidate says it's real, then it's time to let go and become part of the solution.

Yes, I agree - this paper is not about "most published papers" in science. It is about published papers in the area of therapy effectiveness, especially those where we do not have a good model. Of course about half should turn out to be wrong, I would guess, as established by later studies. This is statistics in action: when you are looking for high correlations and selecting for the positive, you will get false ones. And the paper's authors could only find this stuff was wrong because of LATER PUBLISHED RESEARCH showing it was wrong.
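The selection effect described above takes only a few lines of arithmetic to see. Here's a toy calculation of my own (the prior, alpha, and power values are illustrative assumptions, not figures from the paper):

```python
# Sketch of how selecting for "significant" results produces false
# positives, in the spirit of Ioannidis' argument. All numbers are
# illustrative assumptions, not data from the article.

def false_positive_share(prior_true, alpha=0.05, power=0.8):
    """Fraction of 'significant' findings that are actually false,
    given the prior probability that a tested hypothesis is true."""
    true_hits = prior_true * power          # real effects detected
    false_hits = (1 - prior_true) * alpha   # null effects that pass anyway
    return false_hits / (true_hits + false_hits)

# In a "hot" field where only 1 in 10 tested hypotheses is real,
# over a third of the significant findings are false:
print(round(false_positive_share(0.1), 2))  # -> 0.36
```

The point of the sketch: even with standard alpha = 0.05 and decent power, a low prior on the hypotheses being tested is enough to make a large share of published positives false, with no fraud required.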

This is in large part because the empirical method is flawed for humans. It requires complete rationality and unbiased views, two things you just aren't going to find in human beings. Even our currently taught scientific method promotes going into an experiment with expectations. Because these experiments cost money, time, etc. (many times in the form of grants that will be pulled if you don't show significant progress), results are often overblown or outright false. It's not that the lofty ideals are wrong; it's that people can't live up to them.

Why is it that a large portion of scientific research today is garbage? One very powerful reason: money. I saw this firsthand working at a major university medical center on large-scale behavioral research projects. The outcomes of our studies directly affected existing and experimental drugs, and the drug company representatives were right there alongside the researchers at all levels of the process. Professors received "gifts" and other unofficial incentives from them regularly. I saw at least one study...

I agree, but not for the same reasons (although the author's reasoning sounds plausible for big-name journals).

I've found that peer reviewers very seldom give good critiques of the methodology. Rather, most of the comments appear to be on the scope of the paper - comments of the variety "You're doing X. Smith et al. have done Y (which is tangentially and usually very weakly related to X), yet you don't mention this." I suspect that this is because most reviewers don't know enough about the research methods being used.

Scientific research is just that -- research. If it were as easy as doing a couple of experiments, revealing the "truth" and moving on to the next thing, we'd all be living around Alpha Centauri by now. But science is hard and therefore a lot of conclusions are naturally going to be wrong. If that weren't the case then we wouldn't even need any scientific journals -- all we'd need would be newspapers.

Remember the whole "theory of evolution" issue that the creationists keep harping on? "They call it a theory so it must not really be true?" We all know that evolution is just about as "true" as any science gets -- and yet surely there are some portions of the current body of knowledge about evolution that will one day be falsified by later research. That's not a bad thing.

Notable research that has since been thought to be flawed or insufficient: Newtonian physics. Niels Bohr's model of the atom. Gregor Mendel's research into genetics. Einstein's theory of general relativity. Koch's postulates for determining disease causation. Quantum mechanics. And so on.

We all know that evolution is just about as "true" as any science gets -- and yet surely there are some portions of the current body of knowledge about evolution that will one day be falsified by later research. That's not a bad thing.

It's not bad from a scientific perspective. But your life doesn't depend on one particular facet of "Evolution" (or Relativity, or Quantum Mechanics) being accurate.

When doctors are prescribing antidepressants to you based on the latest medical research, the resilience of that research matters a great deal.

There is a spectrum. Some sciences are very robust and some are highly speculative. If you include mathematics as a science, then it is the most robust and needs a robust proof to get accepted as being a "truth". At the other end of the spectrum are the "wishy-washy" sciences which have no proofs, just uncontrolled observations: climatology, paleontology, social sciences, etc.

This latter group is far more prone to differing interpretations by different people.

New information supplants old ideas, but how does that process stay sensible? I often run into manuals on the web that are a few versions out of date, and they never get removed. Until somebody comes up with a way for everyone to share information in a common, version-controlled database, the problem will persist. Wikis and Wikipedia are good ideas, but the concept needs some kind of extension so that information and its corrections are connected in some way.

Wow, headline grabbing, potential break-through scientific theories get a lot of scrutiny and citations from fellow scientists, and many of the theories fail the test of peer review. Daring theories are important for the further advances of science, but by definition most of them will fail.

It's important to identify a problem no matter what, but are any of these biases fixable? I would argue that at least one of them, the bias toward positive results, is not fixable and is inherent to how science works.

To quote one of the articles: "...negative results are potentially just as informative as positive results, if not as exciting." But negative results often require much, much more verification than positive results, if they can be verified at all, and are limited in how much they can tell you. In the antidepressant studies mentioned, a negative result (that the antidepressants did nothing) only tells you that, in the patients tested, the doses tested did not give a noticeable positive effect. Publishing a negative result there would support only very limited conclusions. The next year, they could find that doubling the dose was actually effective, making the write-up of the earlier negative result pointless and even more trivial. A waste of time, plus then you've published saying your own product doesn't work.

Negative results get even more pointless in other fields. If someone does a mutagenesis screen for a particular defect in C. elegans, and doesn't find any mutants affecting that, it could be noteworthy, indicating that any genes affecting that process were so vital that when you took one away you didn't get a worm at all, or it could just be luck that genes affecting the process were never mutated, or the researcher didn't do it correctly, or all genes involved were redundant, or some combination. What conclusions could you draw from that? It would be a negative result that would be nigh impossible to tell anything from. Without any positive hits, you could go to the trouble of making sure you did it correctly, but you're not going to make sure every gene got hit at least once, that would be impossible.

In still other cases, a negative result is often retrospectively found to be the fault of the researcher. Who wants to publish something that is basically telling your peers how dumb you are?

There's also that it requires a lot of extra work to make sure it's a negative rather than a null result. Usually when I hit a negative result, my inclination is to see if I did it wrong by repeating the experiment if possible, if it comes up negative again I usually take a different approach, if that also gets a negative result I re-evaluate. I don't ever do all the other supporting experiments that would be needed to convince a reviewer it's a real negative result. If I use an RNAi construct to knock down a gene, and it doesn't do what I'm expecting or anything else interesting, I don't verify the gene is actually knocked down, since that's more effort that would probably be a waste. I'm definitely taking a risk that it's a real result, but it's hard to prove a negative and there's also less motivation to do so.

The limited ability to make positive conclusions about negative results also limits where they could be published. There is a journal for negative results, but a publication there is not something I personally would put on a CV.

So while it is interesting that a bias against negative results may be throwing us off, it's not very useful knowledge, because I don't see us being able to do anything about it.
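The flip side of suppressing negative results is the winner's curse from the summary: the positive results that do see print tend to overstate the true effect. A small simulation of my own makes the point (the effect size, sample size, and cutoff are illustrative assumptions, not values from the studies discussed):

```python
# Toy simulation: when only studies that clear a significance cutoff
# get published, the published effect sizes systematically
# overestimate the true effect. All parameters are assumptions.
import random
import statistics

random.seed(42)
TRUE_EFFECT = 0.2               # assumed real effect size (in SD units)
N = 30                          # assumed sample size per study
CUTOFF = 1.96 / (N ** 0.5)      # rough one-sided significance threshold

published = []
for _ in range(20000):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    estimate = statistics.fmean(sample)
    if estimate > CUTOFF:       # only "positive" studies see print
        published.append(estimate)

# The mean published estimate lands well above the true effect of 0.2.
print(round(statistics.fmean(published), 2))
```

Nothing here requires anyone to cheat: plain sampling noise plus a publication filter is enough to inflate the literature's estimate of the effect.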

If medical journals are publishing bad research, that's a problem with medicine. Real science doesn't take three years to peer review, and real science is usually (iteratively) correct. Medical research is more similar to social science these days than to physical science. The reliance on large samples of different people to smear out inconsistencies in the data is a mark of poor understanding of the system being studied. It's fine to use this method, but it needs to be differentiated from science in general.

Alan Goldhammer, deputy vice president for regulatory affairs at the Pharmaceutical Research and Manufacturers of America, said the new study neglected to mention that industry and government had already taken steps to make clinical trial information more transparent. "This is all based on data from before 2004, and since then we've put to rest the myth that companies have anything to hide," he said.

While Ioannidis, the author of the original work that was discussed here [slashdot.org], may be correct that a large fraction of a very specific type of clinical research finding is incorrect, there is no reason to believe (from his published work) that "most published research findings are false". Most of the findings he looked at were not reproduced, but they had well-understood limitations. My papers, I can assure you, are only incorrect about 10% of the time.

Science in general isn't about "publishing what is right" but rather about creating a network of accountability in the form of methods, ideas, data, procedures, etc., so others can try to reproduce and critique the results. Even if published results are shown to be incorrect by other studies, this does not mean the system is broken. The scientific process is an iterative, self-correcting one. However, if after many years and many studies a particular field fails to converge on an accepted baseline conclusion, there is a good chance something is wrong (you may even be doing pseudoscience).

Ioannidis wrote a previous article, titled "Why Most Published Research Findings Are False." A very provocative title, one that practically begged the scientific community to read it just to accumulate a laundry list of holes in his argument. A good marketing maneuver. It may or may not be true that most published research findings are false, but the article certainly didn't demonstrate it. Within the context of the discourse he initiated, that would have to be viewed as a kind of willful stupidity, or perhaps marketing brilliance. After all, if the same journal received ten equally well-argued articles with titles like "Why Most Research Is Pretty Good," it would still of course prefer to publish his. This truism segues nicely into the new article (on which he is not first author), which is titled "Why Current Publication Practices May Distort Science." Use of the word "may" is quite helpful here. Does the article live up to its title? It may not be convincing, but it would be even less so without the word "may."

I read a bunch of dental papers recently, and discovered something rather disturbing. A good 90% or more of studies for dental procedures do NOT use any control group. They all say, "we did X and got the expected result." There is no checking whether the procedure is better than other procedures or even doing nothing at all.

Something to think about next time someone you know is told they need wisdom teeth extracted or some orthodontic appliance.

I can't see what is so surprising here. Basically, when you do research, you are groping in darkness - after all, it wouldn't really be worth doing if the results were already known, would it? Approaching a new problem is a bit like looking at the proverbial elephant through a keyhole; different people will make different guesses as to what it is, and most will be wrong, until at some point enough observations are made that you can construct a more complete picture.

When you publish scientific articles you don't claim that "THIS IS THE TRUTH" - you are merely putting forward your opinion and then somebody else comes along and says "No, because...". And even the articles with the "wrong" results are valuable, because they tell us that this particular interpretation is not the right one. It can take a lot of false turns before you find the right way through a maze, and in fact it tells us something about the generally high quality of research that we are not seeing about 90% wrong results.

In the past, people didn't have such a huge reference base so they could follow the logic, but now with computers, the Internet, and massive hard drives, papers ought to be much longer and more detailed.

While I agree with most of your post, I think there are real constraints on paper length. Mostly, these are researcher time - longer papers take longer to write, and to edit - and signal-to-noise - I need to know the basic idea of your paper *before* I decide to check your sign errors.

Of course, many papers in a high-profile journal have a more detailed, companion paper in some more specialized journal, which helps the situation - but you need to look for this paper! In some ways, it seems like we could

I need to know the basic idea of your paper *before* I decide to check your sign errors.

And you trust that the paper has been seen and deemed correct by a referee. I've been a referee for a couple of Physical Review papers and unfortunately it is indeed rather common that there is too little information in the paper to allow "checking for sign errors" as you call it. So you cannot really trust that the reviewer had enough information to vet the correctness of the paper.