In response to the fraud perpetrated by a cloning researcher, Science magazine …


Science magazine was stung by its publication of not one but two papers by the Korean scientist Dr. Woo Suk Hwang, both of which turned out to contain fraudulent data. Beyond the general soul-searching within the scientific community, the journal's editors undertook some very specific self-examination: was there anything about their general manuscript review policy, or their handling of these specific manuscripts, that facilitated the fraud? To answer that question, they invited a panel of former editors and current stem cell researchers to examine all the correspondence related to these publications.

The panel's report has now been released for public consumption. It recognizes two important realities: it is impossible to identify every manuscript submitted with intentional deception, yet Science, as a high-profile publication, receives such manuscripts on a regular basis. Clearly, its current procedures can lead to embarrassments such as the Hwang affair. How can, or should, the journal balance these conflicting pressures?

The panel recommends that editors develop a standardized procedure that would allow certain publications to be held to a higher level of scrutiny. This would include clarifying the exact role played by each author and giving both reviewers and the eventual readers of the paper access to as much of the primary data as they require. These criteria should be applied to any paper with significant potential for broad public interest or significant impact in a controversial field, since public or scientific recognition can itself be a motivation for fraud. The panel suggests that the editors develop a clear risk-assessment procedure for identifying such papers, and that the same standards be applied to other submissions at random, to act as a general deterrent. Other recommendations include a clearer policy on image data and on sharing the reagents that would allow others to replicate the published work.

The recommendations also recognize the reality of high-profile publications, where the authors often rush to publish their findings ahead of competing research groups. Responding to this heightened level of review takes time, and the authors may not be willing to wait. To alleviate this problem, the panel recommends that Science both notify authors when they are placed on this high-scrutiny track and coordinate these procedures with other leading journals, such as Nature (an editor from Nature was a member of the panel). This should prevent authors from shopping for a journal that would offer an easier review experience. The panel also recommends that a mechanism be established for punishing authors who submit fraudulent work, but does not specify what this might be.

Overall, it's hard to fault these recommendations, but it's also hard to say how much they might help. Deliberate fraud would still be possible, and the panel doesn't seem to have based any recommendations on the way several earlier cases were actually caught: through duplicated image data. Hwang and other fraudsters have been tripped up by simply reusing a single data point (a graph or image) to represent more than one piece of information. A specific recommendation to examine data for such irregularities or, better yet, an automated image analysis program would at least catch one of the most common problems.
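To make that last suggestion concrete, here is a minimal sketch of what the simplest form of such an automated check might look like: group a submission's figure files by a hash of their contents and flag any group containing more than one file. All names here are hypothetical, and this only catches byte-identical reuse; real duplications are often cropped, rotated, or re-compressed, which would require perceptual hashing rather than exact hashing.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_figures(paths):
    """Group figure files by the SHA-256 hash of their raw bytes and
    return every group that contains more than one file.

    Note: this flags only byte-identical files. Images that have been
    cropped, rotated, or re-saved at a different quality would need a
    perceptual-hashing approach instead.
    """
    groups = defaultdict(list)
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        groups[digest].append(str(p))
    return [files for files in groups.values() if len(files) > 1]
```

A journal's screening pipeline could run a check like this over every figure in a submission and route any flagged groups to a human editor, at negligible cost per manuscript.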

The other apparent problem comes from the human factor in peer review. Speaking as both a reviewer and a subject of the process, I can say that reviewers raise many more questions than they expect to see answered. As long as authors are willing to address several of the issues with new or better data, they're often able to argue that the others are not relevant or cannot be answered within the scope of a single publication. To some extent, this is viewed almost as a test of sincerity (are the authors willing to do some extra experiments to solidify their claims?), but it can easily be exploited by the insincere. Indeed, parts of the report suggest that Hwang answered some questions with additional data but argued his way out of others. Yet the report never suggests how to strike a reasonable balance here, one that would allow peer review to catch areas of questionable data without turning the process into a time-consuming roadblock to publication.