
Seriously though, I would have imagined that the papers should only get published if the results themselves were reproducible. Somehow those are skipped and the whole peer review system is in trouble.

Actually, the whole peer review system is not in trouble. See, the peer review done by (volunteer) reviewers for the journal is just the first step. The next step comes when the article is published and the entire world gets to see the paper. The fact that the fraud was exposed in pretty short order after publication shows that, indeed, peer review does work.

Perhaps one source of misunderstanding here is that some people assume that peer review is supposed to be some kind of ultimate validation of a work. It's not, it's a basic sanity check and a validation that what the authors *claim* to have done is sufficiently interesting. It's not an endorsement by the publication venue that the work is correct and the authors are honest, because it's impractical to validate something like that without investing a lot more effort than feasible for a peer review. Reproducing a work might take months, even years.

The vaunted peer review - supposed to eliminate problems like this - failed.

Not really. Peer review is designed to catch holes in their logic or spot errors, such as if the incorrect analysis method was applied or if their scientific evidence doesn't fully support their claim. When it comes to outright fraud, a peer reviewer really has very limited means of spotting it. In exceptionally rare cases they will request that a claim be replicated by an outside researcher, but that is exceedingly rare and I don't think I've ever heard of a reviewer actually attempting to replicate research themselves as part of the peer-review process.

What normally happens is that other people in the field will read the paper, say "I don't really buy this," and attempt to replicate it themselves. If a consensus of groups can't replicate the findings, then the question becomes whether there was fraud involved, or whether it was just another example of the "winner's curse," or maybe something unique about the study that was different from all the rest (like looking at a different cell line or global population than everyone else). In no case is it really feasible for the peer reviewer to catch outright deceptive fraud, but it usually gets spotted sooner or later. And the bigger the scientific claim, the bigger the bulls-eye on your back.

The issue is not this one journal. It's a general lack of scrutiny in science itself. The work is not being audited. The data is not being checked. The experiments are not being replicated.

I believe you are mistaken, or alternatively I have a different interpretation of events.

No system is perfect, however the system works more or less as follows.

Scientists do work.

They are under pressure to publish because, not unreasonably, the people paying for the science like to see that their money is going somewhere useful, even if the metric is far from perfect. This generates strong pressure, since science is up-or-out: unlike many other careers, if you don't get promoted, you're eventually fired.

Work is submitted to a journal or conference. Note that in HEP (high energy physics) people just put it up at xxx.lanl.gov and send it to a journal somewhat as an afterthought, to satisfy the previous point.

Editors filter the papers. At lesser journals almost every submission gets sent out for review. At very competitive journals, the editors filter papers strongly, and it helps to know an editor to get your paper past this filter. Some journals (Nature) have paid editors. Most do not.

It gets sent out to other scientists, volunteers, to review. The review is really a check of reasonableness. Nobody expects the reviewers to find deliberate fraud; that's not the point. They're not there to check for actual correctness, only reasonableness. In other words, no one expects them to replicate the results. The reviewers don't have much time. Some less scrupulous ones farm the review out to students. Some are mad as a sack of badgers. Some will insist you cite them more. Others may be able to check some of the results for correctness if they look suspicious (I've done this before, when I believed I'd spotted a glaring error I could test). In some very simple cases (e.g. magnesium diboride as a superconductor) people are able to verify the results before publication.

The previous two steps repeat until the paper is accepted or rejected.

Now the important part starts.

There are two main possibilities here:

1. The paper is boring. In this case it will probably get a few citations bulking up someone's lit review, and nothing more.
2. The paper is interesting, in which case people will try to build on the results.

And orthogonally:

A. The paper is right.
B. The paper is wrong (or bad, or fraudulent, etc.).

(in practice there is a continuum on both)

Basically, in case 1 it doesn't matter whether the paper is right or wrong. It languishes in obscurity, and its rightness or wrongness never has any bearing on the state of the art and "scientific knowledge."

In case 2, people will try to build on it, take it further, and advance the state of the art, because that's where progress is happening. And that's where the scrutiny starts. The more interesting the result, the more intense the scrutiny.

For results of the magnitude of cold fusion and high-temperature superconductivity, every scientist and his dog takes a look. The results are very quickly found to be bad (cold fusion) or good (high-Tc).

This was a big result, and what you're seeing now is the scrutiny.

The journal peer review process is merely there to provide a good place to find new results and stop the scientific world getting flooded in a mound of utter dross. It's the very first part of the filtering process.

Once a result is public it can undergo scrutiny, and generally this happens in the public eye, with people posting comments, writing other papers, commenting on the internet, and so on.

Scientific results in practice undergo more scrutiny than almost anything else, and you're seeing a little bit of it right here even on slashdot.