Posts Tagged ‘publishing’

In 2006, Nature tried an experiment. The journal receives about 10,000 manuscripts a year and sends 40% of them out for traditional peer review. In the trial, the editors asked authors whether they would also submit their paper for open peer review, in which any scientist could leave signed comments. 71 authors agreed.

The journal promoted the experiment heavily on their website, through e-mail blasts, and with targeted invitations to scholars in the field. After four months, they reviewed the results. Despite sizable web traffic to the site, 33 papers received no comments, and the most heavily commented-on paper received only 10 replies.

Nor did the editors find the comments influential in their decisions about whether to publish. They concluded that although many scientists approved of the idea of open review, very few were willing to take part in it.

Their experiment demonstrates both the promise and the pitfalls of social media. It opens up the possibility for dialogue, but it depends on self-motivated users to enrich the content.

After the editors of Infection and Immunity retracted six articles in one year, they got to thinking about the frequency of retractions.

They took a sampling of 17 journals with a range of impact factors and then created an index for each one that measured the number of retracted articles from 2001-2010 as a proportion of total articles published.
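The index described above can be sketched in a few lines of code. This is a hypothetical reconstruction: the scaling factor of 1,000 and the example numbers are assumptions, not the editors' exact formula.

```python
# Sketch of a retraction index: retracted articles over a period,
# expressed as a scaled proportion of total articles published.
# The scale factor of 1,000 is an assumption for readability,
# since raw proportions would be tiny decimals.
def retraction_index(retracted, total_published, scale=1000):
    """Retracted articles as a scaled fraction of total output."""
    return scale * retracted / total_published

# Hypothetical journal: 10 retractions out of 25,000 articles published
print(retraction_index(10, 25_000))  # 0.4
```

Computing the same index for each of the 17 journals would then let the editors plot it against impact factor.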

Their findings show that the higher the impact factor of the journal, the greater the frequency of retractions. They speculate that the rewards of publishing in prestigious venues may motivate researchers to engage in scientific misconduct.

Others who have looked at the data remind us that the number of retractions in all journals is vanishingly small. While it’s important to enforce ethical research practices, we may be overstating the impact of retracted papers.

Phil Davis, a postdoc at Cornell, wanted to see how rigorous the review process at one open access journal really was. So he used software to generate a realistic-looking but gibberish article called “Deconstructing Access Points.”

As the figure on the right shows, the article looked scientific but in reality made no sense. Still, four months after submission, Dr. Davis received word from the editor that his article had passed peer review and was accepted for publication. All he had to do was send $800.

He declined to pay, but wrote about the experiment for a scholarly publishing blog. His trick recalls the Sokal hoax where a physicist submitted a nonsense paper to a humanities journal, got it published, and revealed it later. But where Sokal was poking fun at the meaninglessness of postmodernism, Davis is pointing to the lax regulation of open access journals.

Not all online journals are this craven, but the episode shows that peer review is no guarantee of quality.

South Korean scientists who publish in top-flight journals like Science and Nature receive a $2800 bonus from their government. Turkish scientists who do the same can count on a bonus worth 7.5% of their salary. Other countries reward institutions for publication rates.

Researchers have now looked at whether such incentives have resulted in greater publication success. Of the 110,870 original research articles submitted to Science over the last 10 years, first authors came from 144 different countries. 7.3% of submissions were accepted.
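As a back-of-the-envelope check of the figures above, a 7.3% acceptance rate on 110,870 submissions works out to roughly 8,094 accepted articles:

```python
# Quick arithmetic on the Science figures quoted above:
# 110,870 submissions over 10 years, 7.3% accepted.
submissions = 110_870
acceptance_rate = 0.073

accepted = round(submissions * acceptance_rate)
print(accepted)  # 8094
```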

The study concludes that cash incentives indeed increase journal submissions, but not necessarily acceptances. Rewarding publication with career promotion leads to higher rates of both submission and acceptance. The findings suggest that monetary bonuses are enough to affect the quantity of research, but that improving quality requires more symbolic prizes.

First authors from 144 different countries submitted 110,870 original research articles; 7.3% of these submissions were accepted for publication, and the accepted papers had first authors from 53 different countries.

Presentation documents for the AAMC Group on Faculty Affairs conference later this week have been posted. I noticed in one presenter’s PowerPoint slides several humorous images taken from the web to illustrate her points.

Adding stock images to academic presentations has become so routine that we don’t think about the legality of using someone else’s content. After all, what’s the likelihood that the copyright holder for a movie poster will sue a professor who has co-opted it to liven up a talk?

A new book argues that academics should familiarize themselves with the legal concept of fair use. The authors warn that even if your use of licensed content is educational, it still must follow the guidelines for fair use. Judges have used two questions to determine if an appropriation counts as fair use:

Do you employ the content for a different use than the owner created it for?

Do you use enough of the content to achieve that new purpose?

If the answers are yes, you probably have a strong basis for claiming fair use. If not, you may be violating copyright. Academics understand that all knowledge builds on existing ideas, but you still have to give credit to those who came before you.

Inside Higher Ed reports that the American Economics Association is switching its journals’ editorial process from double blind to single blind peer review. They make several arguments for the change:

In the age of search engines, reviewers can easily and accurately guess the identity of authors.

Making both reviewers and authors anonymous imposes administrative burdens on editors.

Knowing the author will allow reviewers to assess bias and potential conflicts of interest.

On the other side of the debate, a 2008 study found that when the journal Behavioral Ecology adopted double blind peer review, the number of female first-authored papers increased.

When I first started reviewing double blind manuscripts, I would often look for clues to the author’s identity. It was partly from curiosity and partly so I could evaluate the submission in light of the scholar’s other work. More recently, when I review double blind articles, I no longer find it worth the effort to track down the author, so I make judgments based on the manuscript itself. I doubt that this has made any difference in my ultimate recommendations, but it does save time.

Between 1997 and 2009, 1,164 biomedical research articles were retracted. In over half the cases, the cause was scientific misconduct ranging from lack of IRB approval to manipulated data. Though worrisome, these articles represent a small portion of all the literature indexed in PubMed.

More concerning is that, according to a new study, many of these articles continue to be cited well after the retraction is posted. Only 6% of the subsequent citations acknowledge that the original article was flawed. The vast majority of citations occur in literature reviews. Because any search of PubMed would turn up a large “Retracted” watermark on the original article, it could be that authors are not conducting fresh searches to find citations for the literature review section.