Scholarship 2.0 is devoted to describing and documenting the forms, facets, and features of alternative Web-based scholarly publishing philosophies and practices. The variety of old and new metrics available for assessing the impact, significance, and value of Web-based scholarship is of particular interest.

Friday, August 6, 2010

The Scientist > August 2010 > Peer Review Rejected

More than 300 years since the invention of peer review and some 20 years post-Web, it’s time to act. Lest we forget, the Web was originally designed to disrupt scientific publishing, as recently noted by Michael Clarke in the Scholarly Kitchen blog.

The first major disruption has been open access (OA) publishing, a prerequisite for the new metrics, which thrive on increasing numbers of papers and data. And despite its fledgling status, OA has ushered in a second major disruption to the scientific establishment: post-publication peer review (PPPR), in a variety of experiments and formulations, pioneered by BioMed Central and PubMed Central.

[snip]

Thus, transparency and ongoing scrutiny by a much wider community can minimize the failures of traditional peer review (depicted on the cover of this issue), and can also bring to light innovations and discoveries that may have been ahead of the curve at the time of publication. Robust involvement by the community is required, and proposed “reputation systems” may be the key to ensure rewards for commenting and revising.

Peer review isn’t perfect: meet 5 high-impact papers that should have ended up in bigger journals

[snip]

One of the most commonly voiced criticisms of traditional peer review is that it discourages truly innovative ideas, rejecting field-changing papers while publishing work that fits the status quo and the “hot” fields of the day (think RNAi, etc.). Another is that it is nearly impossible to spot the importance of a paper immediately; truly evaluating a paper takes months, if not years, as its impact on the field becomes apparent.

In the following pages, we present some papers that suggest these two criticisms are correct, at least in part. These studies were published in lower-profile journals (all with current impact factors of 6 or below), which would suggest a modest influence. Yet each eventually accumulated at least 1,000 citations. Many had been rejected by higher-tier journals. All changed their fields forever.

Twenty years ago, David Kaplan of Case Western Reserve University had a manuscript rejected, and with it came what he calls a “ridiculous” comment. “The comment was essentially that I should do an x-ray crystallography of the molecule before my study could be published,” he recalls, but the study was not about structure. The x-ray crystallography results, therefore, “had nothing to do with that,” he says. To him, the reviewer was making a completely unreasonable request in order to find an excuse to reject the paper.

Kaplan says these sorts of manuscript criticisms are a major problem with the current peer review system, particularly as it’s employed by higher-impact journals. Theoretically, peer review should “help [authors] make their manuscript better,” he says, but in reality, the cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons—if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example—or simply for convenience, since top journals get too many submissions and it’s easier to just reject a paper than spend the time to improve it. Regardless of the motivation, the result is the same, and it’s a “problem,” Kaplan says, “that can very quickly become censorship.”

The introduction of PubMed in the mid-1990s revolutionized the process of finding and retrieving relevant literature. With much of the drudgery and inconvenience gone, long lists of potentially important publications could be compiled quickly and easily on any computer with an Internet connection. The parallel development of reference database management software further expanded the ability to compile and organize large numbers of abstracts, and ultimately article PDFs.

On one hand, these impressive tools greatly facilitated preparation of comprehensive literature reviews with unprecedented breadth. On the other hand, easy access to so many publications reinforced the temptation to read each paper cited less critically, and sometimes not at all. Thus was born the practice of citing numerous diverse publications to support a point of discussion, instead of citing the one or two most relevant publications with the greatest impact on a field, as if quantity and quality of citations were interchangeable and equally persuasive. [snip]

1 comment:

Thanks for bringing this to my attention. A great summary of the issues -- and it's good to see some solid coverage of the problems with peer review. Perhaps this will help to bring about change. My own assessment of peer review's problems (more from the humanities than from the sciences, but still relevant): http://www.academicevolution.com/2009/02/peer-review-is-vanity-publishing.html


About Me

I formerly had primary responsibilities for Collection Development, Instruction, and Reference and Research Services in Chemical and Biological Engineering; Civil, Construction, and Environmental Engineering; Industrial and Manufacturing Systems Engineering; Mechanical Engineering; Alternative Energy; and Environmental Sciences at the Iowa State University Library, where I was employed from April 1987 to July 2014.
Prior to joining ISU, I served as the Museum Librarian at the Carnegie Museum of Natural History, Pittsburgh, and as an Assistant Librarian with the Library of the New York Botanical Garden in the Bronx, my hometown.
I received my Master of Science degree in Library Science from the University of Illinois-Urbana-Champaign in 1975, and my undergraduate degree in Anthropology from Lehman College of the City University of New York, The Bronx.