Nobel laureate: Break free from the stifling grip of luxury journals

Randy Schekman is a professor in the Department of Molecular and Cell Biology at the University of California, Berkeley, and editor of the journal eLife. He was one of the recipients of the 2013 Nobel Prize in Physiology or Medicine.

Last week was the most memorable week of my scientific career. Accompanied by family, friends and colleagues, I was honored with the award of a Nobel Prize in an unforgettable ceremony and banquet. That same week, I also chose to express highly critical views about deficiencies I perceive in the system scientists use for publishing and rewarding scientific research, for which I was both attacked and praised.

My remarks focused on the power of certain journals, which I refer to as luxury journals, that have distorted how science and scientists operate.


I was not surprised by the range of opinions my comments provoked, but I have been impressed by their quantity. The evidence that the scientific community wants and needs this discussion could not be stronger. I write this to respond to some of the criticisms, to expand on some points I made, and to suggest some next steps.

It is understandable that some see hypocrisy in my criticism of a system that has served my own career well. I have published extensively in Nature, Cell, and Science. I have now, of course, won the Nobel Prize. It is therefore easy, some have said, for me to voice my concerns. But that, in some ways, is exactly the point. I am saying what many others believe but feel they cannot say, because they fear their careers might be damaged.

Yet others have spoken out. I recognize that I am not the only person to criticize luxury journals and an academic reward system that relies too much on them. I applaud those who reached this view long before me. I accept that I could have spoken out earlier in my career, but the Nobel Prize has afforded me a platform from which to speak loudly. The charge of hypocrisy would be fair were I still submitting my own research work to luxury journals. I see none in speaking out, while doing as I say.

It has also been pointed out that I have a conflict of interest. I have edited a major subscription journal (Proceedings of the National Academy of Sciences, PNAS), and now edit an open access one (eLife), both of which compete with the luxury journals in different ways. But I have long held a negative view of the role of impact factors, an imperfect measure of the importance of a journal and its content, and shared my views with the staff and editorial board members who served with me during my term as editor-in-chief of PNAS. The problems with the scientific rewards system extend beyond the competition among the journals.

I have also been clear as to the extent of this conflict. As was declared in the Guardian article, I am leading a challenge to the luxury journals as editor-in-chief of eLife. I am doing this work because I believe that journals need to be radically improved and we have the means to achieve this. Though I draw an employee’s salary, I have no wider financial stake in eLife’s success, and I have always been entirely open about my role. I believe my argument would be weaker if I were not also attempting to change the system in some ways.

I understand, too, concerns that my stance will have career implications for junior colleagues in my lab. I shared these concerns, which is why I discussed the issue with them more than two years ago, when I took on the editorship of eLife. My colleagues agreed then, as they do now, that we should be challenging the big journals, and that papers we would once have submitted to Science, Cell, and Nature should go elsewhere.

I am deeply committed to developing the careers of younger scientists I work with—that, indeed, is a major motivation for my argument. I do not want them to have to play a system where the artificial scarcity of prestige publications makes recognition and advancement such a lottery. It is gratifying that several of my lab colleagues have publicly supported me.

My purpose in avoiding luxury journals, other than being seen to walk the walk, is not necessarily to prompt others to do the same. Rather it is to prompt reflection among researchers, institutions, and funders, who are in a position to limit the poor incentives that the reliance on luxury journals has created. I want scientists and administrators, especially those involved in funding, promotion, recruitment and tenure, to think hard about the influence that publishing decisions and research assessments have. That is the way we will drive change.

One of the most important changes we need is for journals to exploit the advantages of publishing online rather than in print. Too many journals remain wedded to print, artificially limiting the number of papers they accept. This made sense when journals were constrained by page counts, but makes much less sense in a digital world. It makes journals more selective than they need to be, driving extreme competition for space that is good for subscription businesses but bad for science.

Intense competition for space in key journals means that the editorial process often involves multiple rounds of revision, review and resubmission, causing long delays in publication. Additional experimental data and information are often demanded by reviewers who might later, as authors, be competing for space in the same journals. Much of this data is then relegated to supplementary appendices. The experience can be highly dispiriting for researchers.

I see a solution in open-access journals. They generally cover their costs upfront, for example using a business model whereby a fee is levied for publication. This model is more suited to the digital medium: all the work that meets the editorial criteria for the journal can be published, and it can be made freely available to everyone. As high-quality science grows, so can the number of articles published. This, more than anything, is what makes eLife unlike the luxury journals: it is selective, but will publish everything that meets the editors' standards. There is no picking and choosing to meet a quota. It also tries to address some of the other issues listed above, for example using a much more efficient editorial process. And when eLife receives an impact factor, we will not promote it.

Journals, however, are only one half of this equation. We also need to address the demand for luxury journals, from researchers themselves and from the institutions that use them to judge scientific quality. We need to discuss what researchers, universities and funders can do to remove the incentives that make it rational to publish under the biggest brands. I would like to suggest four places to start.

Academics who serve a role in research assessment could shun all use of journal names and impact factors as a surrogate measure of quality. New practices and processes must be devised and shared so that we can rapidly move forward. My Berkeley colleague Michael Eisen has added an important point: we must speak up in appointment and funding committees when we hear others use journal names this way. Here we need peer pressure as much as we need peer review.

Researchers applying for positions, funding, and tenure should avoid any mention of impact factors in their applications or CVs. Article metrics might have a role to play, but narrative explanations of research significance and accomplishments would be more helpful.

Funders, universities, and other institutions should make it clear to their review committees that journal brand cannot be used as a proxy for scientific quality. If reviewers object, they should find different reviewers.

Many of us serve as editors or editorial board members of journals, and we could insist that the publishers of these journals stop promoting impact factors. Instead, the journals could emphasize the other valuable services they provide to authors and readers to promote their worth to the community.

No doubt others will come up with bigger and better ideas to move us away from the problems that we currently face. If I have helped to spark a discussion, I’m delighted. Now we have to turn our attention to action.


57 Reader Comments

But that's the problem: why let the luxury journals be the curators of "impact" ?

They aren't! It's not like they make up the numbers. They're high impact because they curate interesting papers that people want to cite. People want to publish there because people think they have interesting papers. Low impact journals are low impact because they don't publish interesting papers.

re: impact factor. Most-cited is not necessarily the most-read. If a journal is on the internet then actual readership (page views) can be tracked. Both are important.

Much as we can't really comment on the sound of falling trees in deserted forests, if something is read but not cited, we have no good way of knowing that it has contributed to the growth of knowledge. And in the end, that's what the process of tenure and grant applications is all about (or should be).

But of course most-read isn't most useful. The disastrous Andrew Wakefield MMR-autism paper was both heavily cited and heavily read while contributing practically no useful new knowledge whatsoever. Personally, I remain thoroughly unconvinced that the impact of a paper can be reduced meaningfully to a matter of numbers, as seductive as that notion might be.

Because extrapolating from a couple of highly publicized bad-apples is the height of scientific rigor....

Printed words are printed words, and it's easy and cheap now to send words to computer screens. The concept of a journal as a brand-name reputation curator has become irrelevant. With the internet, research articles can be electronically organized by theme and subject area. Quality can be decided by a more honest peer review and community acceptance, with results, accolades, caveats and debates being more democratic and rapid. Editors and theme management still have their place, but without the disingenuous, brand-centered phony elitism of luxury journals.

It's exactly the cheapness of words that's the problem. Papers might be cheap, but human time isn't. Take a month and try to read every single paper that's published in your subfield in that month. Good luck finding time to do anything else.


Reminds me of a picture I saw a while ago subtitled "Welcome to the Internet. Have a nice day... week... month... year... deca... ah hell, I can't keep up." The journals are the same way.


Because extrapolating from a couple of highly publicized bad-apples is the height of scientific rigor....

In case I wasn't clear, I gave that as an example because it neatly illustrates why citations/readings are a flawed measure of quality. I wasn't striving for scientific rigour in that particular internet comment.

I agree with his opinion that subscription journals are beyond greedy, but some hard-copy method of storing research data needs to be in place. Too much of it is being stored in digital form on tape or recordable CDs that eventually fade. I have several 15-year-old Travan tapes that are unreadable now. The same applies to dye-based recordable disks. Good old ink on acid-free paper is still hard to beat. Some documents from over 2000 years ago still survive in readable form.

Sorry, but until my tenure review board, grant review boards, and academic review board take up some of his suggestions, I'd be shooting myself in the foot to ignore high impact journals.

It's all well and good for the established faculty to bemoan the situation with glamor journals, but until they institute the change (since they occupy all the senior positions), young researchers only face negative consequences for refusing to publish in journals like Science and Nature. Perhaps when/if I am granted tenure, I would be willing to support this, but right now I have a career to watch out for, and graduate students to support. These things are predicated on a system that REQUIRES high impact publications.

This is a change that will require a top-down approach, as until the criteria change, it's too much of a risk for young researchers.

I think (luxury) journals are taking a disproportionate amount of heat over this issue. Journals are responding to an academic market demand where the journal itself has value in career advancement. Grant applications are also largely judged on your publication track record, and you will be scored higher if you can publish high. In reality, the enemy is ourselves (I am an academic) for how we assess other researchers.

So, you need to tell them the impact, and one of the ways that you do this is by giving the impact factor of the journal you have published in. For 2012 Cell had an impact factor of 31.957, Nature had 38.597, Science 31.027.

In my particular field, the highest impact journal has an impact factor of 3.426

So, at a simplistic level, publishing just one paper in Nature would be viewed as having 10x the impact of my personal highest impact publication. Easy to see that it is in your interest to get into these journals if you can.

I think one of the suggestions was to not rely on the numbers, but on the actual research itself:

"Article metrics might have a role to play, but narrative explanations of research significance and accomplishments would be more helpful."

I guess he's saying what we all really know, which is that who uses the research and what it's led to is far more important than which journal it was published in. Of course, if your employer is small minded enough to say that "you must have published in journals of impact factor x" then you're screwed, but it didn't sound like that to me. Then again, if you want a job, you'll say anything and everything that will help you get it!

This is still an interesting conversation. The preponderance of push-back seems to boil down to restating exactly the negative influence being challenged, presented from the position that being powerless leaves no choice but to throw oneself on the mercy of the most powerful patron available (thus ensuring a continuation of the potentially abusive power dynamic, but offering enough potential benefit that the wretched still see it as their best shot).

The publications in question are currently able to select from the cream of the crop. A submission boycott by a couple of high-profile folks doesn't seem like it will impact the depth of content enough to change the undesired dynamic. The value in such acts is when the strength of one's soap box can be used to force a conversation that those who benefit as high-end commercialized gatekeepers (including here at Ars) appear to be doing everything within their power to prevent. However, an implementable solution that is fit for wider adoption needs to arise from the conversation if anything is really going to change.

Last time this came around, a participant made a pretty germane observation. If the impact a particular journal enjoys is derived in part from how often its works are referenced, every citation measurably (if minutely) moves a hard metric either in a direction that reinforces the dynamic or reduces it. Additionally, the power to create this currency, new citations, sits completely in the hands of working scientists. These journals only control who gets into the big leagues; they can't *force* people to cite them.

A more effective community-wide approach to changing the culture and bending the impact rankings would be for scientists to stop *referencing* work published in "luxury" journals in favor of other sources whenever possible. Make it a part of the social identity to build impact rankings across a range of quality publications. Primarily drawing from only a handful of popular resources *does* kind of seem like a lazy approach anyhow.

Sorry, but until my tenure review board, grant review boards, and academic review board take up some of his suggestions, I'd be shooting myself in the foot to ignore high impact journals.

Absolutely,

Less than 24 hours ago I signed the paperwork for my tenure dossier to go forwards for review. My university's guidelines for dossier preparation say the following regarding academic output (emphasis is mine):

Quote:

Scholarship and creative activity are understood to be intellectual work whose significance is validated by peers and which is communicated. As specified in the Promotion and Tenure Guidelines, such work in its diverse forms is based on a high level of professional expertise; must give evidence of originality; must be documented and validated as through peer review or critique; and must be communicated in appropriate ways so as to have impact on or significance for publics beyond the University, or for the discipline itself.

<edited some stuff out>

Where not obvious, the dossier should explain how the work was validated and communicated. It is also important to know the significance of the scholarship and creative activity and the stature of the sources in which they appear. These can be commented on after each listing, and discussed in letters of evaluation from the promotion and tenure committee, the Department Chair, Head, Director, or Dean.

So, you need to tell them the impact, and one of the ways that you do this is by giving the impact factor of the journal you have published in. For 2012 Cell had an impact factor of 31.957, Nature had 38.597, Science 31.027.

In my particular field, the highest impact journal has an impact factor of 3.426

So, at a simplistic level, publishing just one paper in Nature would be viewed as having 10x the impact of my personal highest impact publication. Easy to see that it is in your interest to get into these journals if you can.

<edit: oops, math is hard...>
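For what it's worth, the arithmetic behind that back-of-the-envelope comparison is easy to check. A minimal sketch, using only the 2012 figures quoted in the comment above:

```python
# 2012 impact factors quoted in the comment above
nature_if = 38.597
field_best_if = 3.426  # highest-impact journal in the commenter's field

ratio = nature_if / field_best_if
print(f"{ratio:.1f}x")  # ~11.3x, slightly more than the "10x" estimate
```

So "10x" undersells it a bit, which only strengthens the point about the incentive.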

Wow, not only did you cite numbers which are meaningless to many of us, but you also felt the need to include digits after the decimal point that nobody could consider significant. Math truly must be hard... for you.

It is in fact very easy to pick a different metric that is more meaningful than the currently used impact factors. Just by going for the median number of citations per article instead of the mean, we would deflate much of the hype around the big journals.

It should also be recognized that papers published in bigger journals get a lot more exposure, thereby increasing the chance of being cited independently of the quality of the work. Evaluation committees must get it into their collective thick skulls that a publication in a big-name journal must have its actual impact corrected down for the overall impact of the journal.
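The mean-versus-median point is easy to illustrate: per-article citation counts are heavily skewed, so a couple of blockbuster papers can inflate a journal's mean while the typical article fares far more modestly. A minimal sketch with hypothetical citation counts (the numbers are invented purely for illustration):

```python
from statistics import mean, median

# Hypothetical citation counts for ten articles in one journal:
# two blockbuster papers and eight modestly cited ones.
citations = [2, 3, 4, 5, 6, 7, 8, 10, 150, 400]

print(f"mean (impact-factor style): {mean(citations)}")    # 59.5, pulled up by the outliers
print(f"median (robust alternative): {median(citations)}") # 6.5, close to the typical paper
```

The mean suggests a journal whose typical paper draws about 60 citations; the median shows the typical paper actually draws about 6.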

I have to disagree almost with all his points. I don't see how high impact journals have pushed people towards easy or low risk science. Sure, there are a few cheesy articles in science or nature. But, the bar is way higher for these journals. Most papers are reviewed by the top scientists in their fields with extra scrutiny. I think the problems we are facing are mostly related to current funding environment. And we have way more serious problems than what's happening with high impact journals. I wish he had protested against the lack of a viable career path beyond postdoc in academia rather than ranting against science/nature.

I wish he had protested against the lack of a viable career path beyond postdoc in academia rather than ranting against science/nature.

Oh, but there IS a viable career beyond "postdoc" -- it's called assistant professor, associate professor, full professor, etc.

The fact that the supply of qualified aspiring individuals greatly surpasses the demand on all levels is a different matter that has to do mostly with qualification. There, reasonable criteria for achievement, which cumulative impact strives and fails to provide, are essential.

I think the problems we are facing are mostly related to current funding environment. And we have way more serious problems than what's happening with high impact journals.

I think scientists are pointing fingers at the wrong places. In fact they should look at how grants are scored and awarded, and how committees decide on tenure. Nature and Science and Cell existed long before grant funding and tenure-track positions transformed into the huge mess we have today, and they are responding to the demands of academics. Regardless of how many papers they take in for peer review, these journals will always remain competitive because of the intensive peer review (as someone who is in academia, the bar set by these journals is substantially higher). The whole issue of open access is a great one, and I commend organizations like PLoS for being disruptive. But these two systems do not have to fight it out in a Highlander-style deathmatch, and boycotting what are very good journals is not productive and conveniently avoids discussion of the important issues in academia.

As a junior researcher approaching tenure, I am unwilling to shoot myself in the foot by hiding the impact factor of the journals in which I publish. I am especially unwilling to take such a principled stance based on the recommendation of someone whose standing to make this case is based on accepting an award created by an arms dealer that goes to men ~95% of the time (http://en.wikipedia.org/wiki/List_of_fe ... _laureates).

"There is Nature in Science, and no Science in Nature". Too many Nature and Science articles are selected to be spectacular so that too often they are spectacularly wrong. Because of their standing and popular-press exposure the false ideas stay around for long times, damaging the self-correcting par of the scientific process.

As a junior researcher approaching tenure, I am unwilling to shoot myself in the foot by hiding the impact factor of the journals in which I publish. I am especially unwilling to take such a principled stance based on the recommendation of someone whose standing to make this case is based on accepting an award created by an arms dealer that goes to men ~95% of the time (http://en.wikipedia.org/wiki/List_of_fe ... _laureates).


Well, there are two possibilities about your own publications -- they are either more cited than the average for the publishing journal, in which case you'll be shooting yourself in the foot for not flaunting the actual citations; or they are cited less -- in which case, you should consider yourself lucky for getting into a journal just to drag its cumulative impact down.


Well, there are two possibilities about your own publications -- they are either more cited than the average for the publishing journal, in which case you'll be shooting yourself in the foot for not flaunting the actual citations; or they are cited less -- in which case, you should consider yourself lucky for getting into a journal just to drag its cumulative impact down.

They tend to be well cited, but citations take some time to accumulate. If I get a Science publication three months before going up for tenure, it might have a few citations (plus all the altmetric stuff like retweets), but it's probably worth highlighting that hey, it's a Science paper. The other stuff is good to point out, but leaving out anything that can help is unwise.

It may well be that other scientific journals will give up print to cut costs. It is natural that there will be a pyramid of journals, with some having more prestige than others.

Maybe the top of that pyramid is too narrow, and a few journals have too much power, but the fact that there are only a few journals there is not strong evidence of that. A list of cases where those journals have ignored research of greater significance than that which they published is the kind of evidence that would indicate a real problem.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors works with these experts to share their knowledge with the wider public. Our aim is to allow for better understanding of current affairs and complex issues, and hopefully improve the quality of public discourse on them.