
Science Magazine rejects data, publishes anecdote

Yesterday, Science Magazine published a news story (not a peer-reviewed paper) by Gonzo-Scientist John Bohannon about a sting operation in which a journalist submitted a bogus manuscript to 304 open access journals (note that no toll access control group was used). Science Magazine reports that 157 journals accepted the manuscript and 98 rejected it. No word on any control group or other data that would indicate what the average acceptance rate for bogus manuscripts might be in general.

As Michael Eisen points out, this story is a case of the pot calling the kettle black: Science Magazine is itself replete with bogus articles (the one on #arseniclife, for instance), and the magazine has one of the highest retraction rates in the entire industry. Which brings me to the main point of this post: it should come as no surprise that Science Magazine publishes a news story on an ill-conducted sting operation, an anecdote without proper controls – that’s what glamor magazines like Science, Cell or Nature do. The data we have on this point are quite unequivocal: high-ranking journals like these retract many more papers than other journals, and a large fraction of those retractions are due to fraud. There is not a single quality-related metric in the literature that would confidently indicate any quality advantage of high-ranking journals over others. There are, however, a number of metrics which suggest that the quality and reliability of the science published in these GlamMagz is in fact below average.

To make things worse, when we submitted these data to Science Magazine, they rejected them with the remark that “we feel that the scope and focus of your paper make it more appropriate for a more specialized journal”. Obviously, Science Magazine values anecdotes more than actual data. No surprise their retraction rate is going through the roof: they reject data that make them look bad and publish anecdotes that make them look good.

In your paper, how did you control for visibility, readership, and interest when estimating rates of fraud and retractions? If a paper is all over the front page of the NY Times and read by tens of thousands of scientists because it is in Science, surely that implies a much higher chance of its fraud and errors being detected than for a paper that is never read, never cited, and never makes it into the news? From my standpoint, it is not worth my time digging into a paper to see whether it is fraudulent or wrong if nobody reads it anyway and the wrong idea dies a sad and lonely death.

I say this as someone who has spent considerable time rebutting ideas that are wrong, but focusing only on those ideas with the greatest persistence because they were published in Science and Nature. I see papers in low impact factor journals that are wrong too, but given how little impact a rebuttal has on the staying power of the original paper, there is not much point submitting rebuttals to low impact journals. e.g. http://www.esajournals.org/doi/pdf/10.1890/ES10-00142.1

It is only the few top journals that show an inordinate frequency of retractions; in absolute numbers, far more retractions occur in other journals than in the ‘hi’ journals. So people evidently do care about retractions in those journals as well, which argues against the idea that nobody cares about articles in ‘LO-IF’ journals.

Moreover, if visibility had such a large effect, as you suggest, why is the correlation with citations so abysmally low? Surely it is much easier to cite a ‘HI-IF’ paper to raise the perceived value of one’s own study than it is to get that paper retracted? So if visibility were the main factor, the correlation with citations should be much higher than the correlation with retractions. However, the opposite is the case.