Wednesday, September 7, 2011

Science run amok?

Yesterday's post on the scientist on autopilot touched on the subject of scientific publishing. We wanted to follow up on our suggestion that, at least in a technology-driven field like genetics, it might be a problem if scientists are thinking less and publishing more.

We scientists, especially those whose jobs or even salaries depend on grants, have found it in our self-interest (at least in the short term) to encourage a proliferation of data, technologies, and journals (and headline-blaring public media). These allow us to compete for funds in the current system as it has developed, because they give both the reality and the illusion of prolific productivity.

Several recent pieces in The Guardian have discussed the ethics and economics of modern scientific publishing. The first, George Monbiot's "Academic publishers make Murdoch look like a socialist" (Aug 29), reverberated around the internet.

Who are the most ruthless capitalists in the western world? Whose monopolistic practices make Walmart look like a corner shop and Rupert Murdoch a socialist? You won't guess the answer in a month of Sundays. While there are plenty of candidates, my vote goes not to the banks, the oil companies or the health insurers, but – wait for it – to academic publishers. Theirs might sound like a fusty and insignificant sector. It is anything but. Of all corporate scams, the racket they run is most urgently in need of referral to the competition authorities.

His point is that the public pays to generate the knowledge, and then must pay again to see the results. Accessing even a single journal article online, if you don't have access through an institution, can cost $30-40 or more. And university libraries pay thousands of dollars, sometimes tens of thousands, to subscribe to these journals.

What we see here is pure rentier capitalism: monopolising a public resource then charging exorbitant fees to use it. Another term for it is economic parasitism. To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning.

The 'solution', open-access journals, is too often exorbitantly expensive for authors (fees can run on the order of $2,000-3,000 per paper), which is of course yet another way the public pays for science, through the grants that cover the publication fees. And it has done nothing to knock the 'top-notch', or 'high-impact', journals off their monopolistic perches: the bean counters at universities still reward publication in Nature or Science or Cell over publication in the PLoS journals, and scientists still turn to the 'top-notch' journals first. Monbiot heavily criticizes the publishers for taking extreme advantage of their monopoly:

The returns are astronomical: in the past financial year, for example, Elsevier's operating profit margin was 36% (£724m on revenues of £2bn). They result from a stranglehold on the market. Elsevier, Springer and Wiley, who have bought up many of their competitors, now publish 42% of journal articles.

They can't be blamed, really, since they're in business to make money. But we in the profession, not they, have allowed the system to develop in this way.

The system is being challenged, as Ben Goldacre writes in his Guardian column, Bad Science. "Digital activist" Aaron Swartz, of Harvard's Center for Ethics, apparently attempted to download millions of archived journal articles, intending then to allow free access to them. He has been "accused of theft of intellectual property on a grand scale."

Totally free access may be ethically appealing, but it's not sustainable: someone has to pay to publish science, either the producer or the consumer. Monbiot would say the real problem is that the publishers charge far too much, as their high profit margins attest. Indeed, most of the work of publishing a scientific paper is done by unpaid reviewers, with the editing done, also unpaid, by the authors themselves, for the 'privilege' of then paying the publication charges. Open access, in which the authors pay to publish, might seem to solve the problem of cost to the consumer, but the fees are high and can compromise the integrity of the whole endeavor. And scientists will pay those fees from their grants, not their own pockets. David Colquhoun, in another of the Guardian pieces, lays the blame elsewhere:

The blame for this sad situation lies with the people who have imposed a publish-or-perish culture, namely research funders and senior people in universities. To have "written" 800 papers is regarded as something to boast about rather than being rather shameful. University PR departments encourage exaggerated claims, and hard-pressed authors go along with them.

The result is an equally vast quantity of poor papers and bad science, and, as he says, the only beneficiaries of the system as it now stands are the publishers. But the administrators and the grant system can't be blamed alone: we allowed this system to develop over the past 40 or so years, as it snuck in gently when money was plentiful and scientists fewer. Once administrators saw the overhead and other score-counting opportunities, they naturally seized on them. And with grant funds exceedingly competitive, and an unwillingness to be realistic and fund more or less randomly, we pretend that criteria like publication counts are evidence of quality.

Colquhoun suggests that one solution would be for authors to publish their papers on their own websites and open them up for comments, though he doubts this would work: Nature tried something similar a few years ago and it fell flat. So instead he proposes a quota system:

I'd suggest scientists should limit themselves to an average of two original papers a year. They should also be limited to holding one research grant at a time. Anyone who thought their work necessitated more than this would have to be scrutinised very carefully. It's well known that small research groups give better value than big ones, so that should be the rule.

And then, he says, the papers might actually be read rather than just counted.

There's no question that quality and quantity are conflated, but as long as the pressure on scientists is to bring in overhead (er, to do important research) and to publish -- anything -- rather than to do quality work, the system isn't going to change. Quality could be better ensured by making scientists responsible for the claims they make in their grant applications and publications, by setting limits on the number of grants a researcher can hold, even a lifetime limit, by reducing the amount of overhead universities are allowed to charge, or by other kinds of caps and limitations.

Also, with the proliferation of results, we can hardly tell which will turn out to be important (whatever that means), what in each paper is grain rather than chaff, and which papers are simply junk. We're not referring here to fraud and fabrication. But when we haven't the time or the ability to judge the details, how can we judge the actual importance? Replication is one good way, but these days it can rarely be done very convincingly. And how can we recognize the unsung gems hidden in the morass of legitimate but incremental, if over-claimed, results?

Of course, history decades from now will make some of these judgments, though it may not recognize the gems that should have been followed up. So there are many things to think about -- and that's hard to do in a system that forces us to flood the market, and hence systematically takes away the time for that kind of measured reflection.
