Academics generally recognise that the scholarly publishing business model is flawed, the impact factor does not point to quality, and open access is a good idea. And yet, academics continue to submit their work to the same for-profit journals. Philip Moriarty looks at what is keeping academics from practising what they preach. Despite many efforts to counter the perception, journal ‘branding’ remains exceptionally important.

I’m going to put this as bluntly as I can; it’s been niggling and nagging at me for quite a while and it’s about time I got it off my chest. When it comes to publishing research, I have to come clean: I’m a hypocrite. I spend quite some time railing about the deficiencies in the traditional publishing system, and all the while I’m bolstering that self-same system by my selection of the “appropriate” journals to target.

Despite bemoaning the statistical illiteracy of academia’s reliance on nonsensical metrics like impact factors, and despite regularly venting my spleen during talks at conferences about the too-slow evolution of academic publishing towards a more open and honest system, I nonetheless continue to contribute to the problem. (And I take little comfort in knowing that I’m not alone in this.)

A journal’s impact factor (JIF) is clearly not a good indicator of the quality of a paper published in that journal. The JIF has been skewered many, many times with some of the more memorable and important critiques coming from Stephen Curry, Dorothy Bishop, David Colquhoun, Jenny Rohn, and, most recently, this illuminating post from Stuart Cantrill. Yet its very strong influence tenaciously persists and pervades academia. I regularly receive CVs from potential postdocs where they ‘helpfully’ highlight the JIF for each of the papers in their list of publications. Indeed, some go so far as to rank their publications on the basis of the JIF.

Given that the majority of research is publicly funded, it is important to ensure that open access publication becomes the norm. This one is arguably rather more contentious and there are clear differences in the appreciation of open access (OA) publishing between disciplines, with the arts and humanities arguably being rather less welcoming of OA than the sciences. Nonetheless, the key importance of OA has laudably been recognized by Research Councils UK (RCUK) and all researchers funded by any of the seven UK research councils are mandated to make their papers available via either a green or gold OA route (with the gold OA route, seen by many as a sop to the publishing industry, often being prohibitively expensive).

With these three “axioms” in place, it now seems rather straightforward to make a decision as to the journal(s) our research group should choose as the appropriate forum for our work. We should put aside any consideration of impact factor and aim to select those journals which eschew the traditional for-(large)-profit publishing model and provide cost-effective open access publication, right?

Indeed, we’re particularly fortunate because there’s an exemplar of open access publishing in our research area: The Beilstein Journal of Nanotechnology. Not only are papers in the Beilstein J. Nanotech free to the reader (and easy to locate and download online), but publishing there is free: no exorbitant gold OA costs nor, indeed, any type of charge to the author(s) for publication. (The Beilstein Foundation has very deep pockets and laudably shoulders all of the costs).

But take a look at our list of publications — although we indeed publish in the Beilstein J. Nanotech., the number of our papers appearing there can be counted on the fingers of (less than) one hand. So, while I espouse the three principles listed above, I hypocritically don’t practice what I preach. What’s my excuse?

In academia, journal brand is everything. I have sat in many committees, read many CVs, and participated in many discussions where candidates for a postdoctoral position, a fellowship, or other roles at various rungs of the academic career ladder have been compared. And very often, the committee members will say something along the lines of “Well, Candidate X has got much better publications than Candidate Y”…without ever having read the papers of either candidate. The judgment of quality is lazily “outsourced” to the brand-name of the journal. If it’s in a Nature journal, it’s obviously of higher quality than something published in one of those, ahem, “lesser” journals.

If, as principal investigator, I were to advise the PhD students and postdocs in the group here at Nottingham that, in line with the three principles above, they should publish all of their work in the Beilstein J. Nanotech., it would be career suicide for them. To hammer this point home, here’s the advice from one referee of a paper we recently submitted:

“I recommend re-submission of the manuscript to the Beilstein Journal of Nanotechnology, where works of similar quality can be found. The work is definitively well below the standards of [Journal Name].”

There is very clearly a well-established hierarchy here. Journal ‘branding’, and, worse, journal impact factor, remain exceptionally important in (falsely) establishing the perceived quality of a piece of research, despite many efforts to counter this perception, including, most notably, DORA. My hypocritical approach to publishing research stems directly from this perception. I know that if I want the researchers in my group to stand a chance of competing with their peers, we have to target “those” journals. The same is true for all the other PIs out there. While we all complain bitterly about the impact factor monkey on our back, we’re locked into the addiction to journal brand.

Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Philip Moriarty is Professor of Physics in the School of Physics and Astronomy, University of Nottingham. His research interests span a number of topical themes in nanometre scale science, with a particular current focus on single atom/molecule manipulation using scanning probes. His ORCID profile includes a full list of publications and grant awards. He blogs as regularly as he can (which is sometimes not particularly regularly) at Symptoms Of The Universe.

What is needed is to break the cycle:
1) systematically challenge those peers who use journal name as proxy for science quality in everyday life (colleague “Wow, that’s a Science paper”; answer: “you mean, like Arsenic DNA?”); aggressively challenge them if that happened in the context of recruitment.
2) stop perpetuating the myth that the only way to get recruited is by publishing in glam journals
3) try to publish your best papers in non-glam journals… but leave the final choice of journal to 1st author

There are at least two academic publishers who are willing to publish without too much fuss, and I mean this for full-length books as well as papers of goodly size. I recently sent my book to one of them and it was published. The only trouble is that, due to the small number of books likely to be demanded, the asking price is very high. Today, with electronic storage and printing methods, it’s all done on a print-on-demand basis: individual copies are produced this way and it works!

[…] The fifth in a series of posts on The London School of Economics and Political Science blog following on from the December conference Power, Acceleration and Metrics in Academic Life, by Philip Moriarty: Addicted to the brand: The hypocrisy of a publishing academic. […]

Research is publicly funded, but that does not mean quality output is guaranteed. I’m not sure what you mean by “for-profit” journals: many of the “open access” journals are the ones making a profit, by charging the author. At present, researchers don’t pay to get published, but people willing to read the latest findings in good journals are expected to pay. Nothing wrong with this in my view.

Just a reaction to the statement that “people willing to read the latest findings in good journals are expected to pay. Nothing wrong with this in my view”:

This is a very west-centred perspective. Many scholars from economically weaker countries publish with the prestigious publishing houses (Roger Chartier, the legendary book history scholar, calls them “firms”) like Elsevier, Taylor and Francis Group or Springer, but they cannot afford to pay for the articles in their paywalled databases. They can thus contribute to the system for free, but they cannot take from it. It is a very unfair, unethical, discriminatory system. This needs to be continuously emphasized and exposed, otherwise it will never change.

The high cost is my main reason for not using open access journals. Yes, they do give discounts, but generally I am not eligible for them, and many funding agencies in my area expressly say they won’t cover publication costs. So until the cost of publication drops and funders agree to pay at least some of it, I will keep publishing where it does not cost me.

I would like to propose a strategy to continuously break the cycle and stop being hypocrites. Of course, at some point it will mean doing something different from yesterday!

First, there is a causal relationship between the use of the impact factor as a way to evaluate articles and the other problems you mention (and, in fact, all the others you do not mention!). Because we use the IF to evaluate science, we desperately need journals; the process of science becomes privatized and fragmented, and in return journals have full power to impose what they want: money, closed access, and their own scientific policy (which may be quite conservative, slowing the progress of science, with essentially random peer review). What we need to fix is the way science is evaluated. As you point out, this new evaluation must be compatible with current practices (the change must be continuous), since few people will accept risking their careers for science’s sake.

There is a solution in SJS (www.sjscience.org), a non-commercial repository that offers tools to the scientific community to build a novel community-based evaluation that no journal can reproduce. Since SJS is a repository, it can be used in parallel with current practices (like arXiv), so you can immediately start putting value into this novel mechanism while still playing the publish-or-perish game.

In a few words, SJS proposes to evaluate the quality of articles along two axes. The first is validity (the objective part of quality), established through an explicit, signed consensus within the community. The second is importance (the subjective part of quality), established through a particular community-wide curation mechanism. You can read the details at http://www.sjscience.org/article?id=46. In the end, lazy committee members will also have numbers that they can use as easily as the IF, but with much better scientific significance — numbers which no longer depend on private players such as journals and whose internal logic strongly incentivizes open science.

Using this allows us to continuously prepare an alternative to the IF which will eventually make science free (both as in free speech and as in free beer). Its relevance depends on the number of users. I welcome you to have a look, discuss it (e.g. by openly reviewing the aforementioned article) and take action if you think there is indeed hope for change!

[…] should take the lead in encouraging Open Research. The simplest way to start is to stop being what has been described as ‘a hypocrite’ and submit articles to journals which are fully Open Access. This should be […]

How about this for breaking the cycle? Submit your paper to a high-IF journal, get it accepted there, then instead of publishing it there, retract it and post it to arXiv or another open repository (saying “This paper was accepted to the Journal of Blah Blah”) and provide the reviewer reports. Eventually, people will get fed up with this “buffer”, but hey, if you were able to get past Journal of Blah Blah peer review, this paper should be good, right?

“Academics generally recognise that the scholarly publishing business model is flawed, the impact factor does not point to quality, and open access is a good idea.”

If you believe these things, then you may indeed be a hypocrite if you act otherwise. Another possibility is that you don’t really believe these things but are giving in to the pressure to nod along with them, since they constitute the posh view in many academic circles and you are worried about negative social or career consequences if you point out, for example, the limitations of open access or the fact that (gasp) the average paper in Nature is pretty good. The resolution to your hypocrisy may therefore not be to change your behaviour, but to recognize that your actions reflect what you really believe, and to start being honest about that with your colleagues. When you do so, you will find that the opening statement of this blog, “Academics generally recognise”, is not really true; it’s more that “most academics know that they are better off acting as if they recognise”.

“Academic publishing is a game: if you do not like it, what are you doing on the playing field???” This is illustrative of the attitude of a majority of researchers who publish. Most academics do not do research or attempt to publish, and progress through the ‘managerial’ route. Without exception, all (Executive) Deans insist on publishing in 3* and 4* journals… Resistance is futile and counterproductive…

Academics behave like slaves. Guess who will beat up a slave who tries to escape the most? Not the masters, but other slaves.

Interestingly, who is preaching to their students that they should seek freedom and independence? The academics. The result? They are merely preaching in their handcuffs. Their preaching is no more than another moan about their own plight, somehow misconceived as their own free spirit. Sadists: the saddest species on the planet.