Amongst the many "books that you absolutely have to read" for scientists is Bruno Latour's Laboratory Life (which is basically his PhD thesis). In this book, he documented the process of doing science as seen through an anthropologist's eyes. One of his insights is that a lot of what we do as professional scientists is try to accumulate credit: we want our work to be read and cited, and discoveries (like biochemical pathways) to be named after us. Whether we like it or not, this is an important part of being a career scientist: building up a reputation for doing good work, which is recognised by our peers who will then judge us for promotion, getting grants, being invited to speak at meetings etc.

And one way this manifests itself in science nowadays is in the choice of journal we try to publish in. We try to publish in journals that make us look better, i.e. those that have a better reputation, so we accumulate more credit when we list these in our talks and CVs. Thus in my area, ecology, we would rather publish in Proceedings of the Royal Society B or American Naturalist than in Methods in Ecology and Evolution (despite the clear brilliance of the latter's editorial board) -- because of the reputation of these journals.

This, then, creates pressure on journals to enhance and maintain their reputations. We don't want to publish in journals that are seen as being dodgy, or as only publishing boring and irrelevant science, so journal editors try to publish what they perceive to be good papers, whilst rejecting those that are not so good. They want our custom (i.e. our papers), so these journals play the game of making themselves look attractive to us. The flip side is that we scientists, eager to get our papers into good journals, submit to these good journals in the hope that they see our obvious brilliance.

But what of open access (which is, after all, in the title of this piece)? I think some of the issues surrounding open access and how it is evolving are impacted by journal reputation, and how this is being traded against other concerns, like making open access financially viable. Because open access journals can't charge for reading papers, they instead charge (when they do charge) the writer for publishing their paper. The monetary incentive for author-pays journals then focuses on accepting as many papers as possible -- which conflicts with the reputational incentive of only accepting "good" papers.

Good news for Open Access

Last week, some parts of the internet were full of tittering after an open access journal accepted a paper that had been generated randomly and was full of gibberish. This is the logical extreme approach to open access: accept everything, so you can pull in more money. Open access is sometimes criticised for this, and this is used as an argument for why open access is bad for science.

But I don't think this argument is valid, because it ignores the effect of reputation. The journal that accepted the randomly-generated paper is published by SCIRP, and is on Jeffrey Beall's List of Predatory Open-Access Publishers. In other words, we know from their behaviour that they essentially act as vanity publishers for scientists. Based on the number of spam emails I receive from them, I suspect they've managed to become quite well known for this. Which means that nobody will think highly of a paper published in one of their journals, so very few scientists will want to submit a paper to them: you simply don't get any credit from your peers for publishing there -- indeed, they may even laugh at you behind your back.

So if an open access publisher wants to be successful in the long term, predatory publishing is probably not a good model: it's better to build a reputation for not publishing rubbish.

The nicer parts of Open Access

So how do non-predatory open access publishers make money? Their strategy has been to take a slightly different approach to publishing what is considered a "good" paper: it doesn't have to be important, only technically well done. This means they can accept more papers because they don't have to judge one aspect of quality, so more money comes in, and it's easier to support themselves financially.

Now, being technically good isn't good enough if you want to be a successful scientist: you have to be asking the important scientific questions (which is why Darwin's careful descriptions of barnacles are obscure). If you want to enhance your reputation, you would do better to first submit to a journal that tries to publish important science. If your paper is rejected by that journal, then you move down the reputational ladder. Because open access works better when it ignores one aspect of quality, these journals will tend to be lower in the reputational hierarchy (it's worth noting that although Public Library of Science -- PLoS -- have journals like PLoS Biology and PLoS Genetics that use importance as a criterion for acceptance, these journals are not themselves financially viable: they have to be supported by PLoS One).

This is creating a structure with high impact journals at the top and middle tiers of science publishing, with open access journals acting as buckets, catching anything that is allowed to fall through after having been deemed not important enough.

The traditional publishers have, of course, noticed all of this. A couple of years ago Springer bought out the open access publisher BioMed Central, and now both Elsevier and Wiley are launching a range of open access journals. I have been keeping a bit more of an eye on Wiley because I am executive editor of Methods in Ecology and Evolution, which we publish through them. They now have an open access journal called Ecology and Evolution. Wiley also publishes several other journals in this subject area, so they have a "Manuscript Transfer Program": if one of the other journals in the programme rejects a paper, they can suggest it be transferred to Ecology and Evolution. The reviewers' comments are also automatically transferred, so the manuscript can be judged quickly, and the process of re-submission and re-evaluation is thus sped up. One effect of this programme may be to tie researchers' manuscripts to a single publisher: it becomes easier to shuffle a manuscript between journals in one publishing stable than to send it to another journal. This, presumably, is what Wiley are hoping will happen: they want to take away PLOS One's market share by making it easier to publish in a journal with a similar profile.

Re-evaluating importance?

One of the curious aspects of the open access movement has been the way it has also become attached to other campaigns in science publishing. Many of the criticisms of Elsevier we saw earlier this year had little to do with open access (i.e. so they charge too much? Well, don't pay until they reduce the price). One such campaign has been the vilification of the impact factor, and the promotion of altmetrics. I don't want to get into the full debate here, but I mention it because it bears on the reputational questions above.

I pointed out above that the journal we publish in is used as a signal for quality. But open access publishing works best when one aspect of quality is not used in the decision to publish a paper. So how can open access publishers get out of the trap of publishing papers that aren't considered very important? The open access community's answer has been to suggest we redefine our measures of quality: rather than use journal title, we measure the quality of an individual paper. How do we do this? Well, after it has been published, we can measure its use: how many times it has been downloaded and cited, etc. The promise of altmetrics is that it will find a metric that measures the quality of a paper, which we can all use to boast about how wonderful our paper is, rather than bragging about which journal it is published in.

Now, I'm sceptical this will happen, for a few reasons. One big reason is that altmetrics can only be measured after a delay. So, put yourself in the place of a PhD student applying for a post-doctoral position. Typically you will have one or two papers published, and these only published in the last year or so. You will have other papers in the pipeline: perhaps accepted or in press. How do you show off the quality of your work? Obviously you can't say "I've got two papers accepted by PLOS One" because it's not that impressive: any decent scientist should be able to publish in PLOS One. And any paper in press doesn't have an altmetric associated with it, so that doesn't help. Even worse, the judgement of the quality of a paper is built up over time: it takes time for people to become aware of it, and perhaps to follow it up (and the initial reaction may be because of something ephemeral, like Fig. 1c of this paper). So, even if altmetrics finds a metric for quality, it will only be effective when it's too late: it'll only measure the quality of older work. In short, I just don't see how any alternative metric will be taken up by the scientific community in their day-to-day boasting about their papers. Quite simply, we already have a way of measuring credit, even if it is imperfect. So why should we replace this metric with a number (or two or three or 17) that we don't really understand and that will only reflect some small portion of a paper's quality?

The near future for Open Access?

All of these meanderings have to be seen in the context of the flux that scientific publishing is in at the moment. It looks like open access could become entrenched at the lower end of the reputational scale because it mainly publishes papers that are not deemed particularly important in the bigger scheme of things. But I wonder what will happen in the medium term. Paying to publish in open access journals is already widely accepted. And a lot of journals now have open access options (this includes Methods in Ecology and Evolution -- and it's half price if you're a member of the British Ecological Society!). So I wonder if, within a few years, paying for open access will become the norm, rather in the way that refereeing manuscripts without payment is considered the normal thing to do. Funder mandates should push things in this direction, as will streamlining how payments are made (e.g. if the individual researcher doesn't have to take the money from their own grant). What will happen? Will the PLOS model become the norm, with the fees from the catch-all journals supporting those above them? Who knows? It'll be fun watching it happen, though.