Nobel laureate: Break free from the stifling grip of luxury journals

Randy Schekman is a professor in the Department of Molecular and Cell Biology at the University of California, Berkeley, and editor of the journal eLife. He was one of the recipients of the 2013 Nobel Prize in Physiology or Medicine.

Last week was the most memorable week of my scientific career. Accompanied by family, friends and colleagues, I was honored with the award of a Nobel Prize in an unforgettable ceremony and banquet. That same week, I also chose to express highly critical views about deficiencies I perceive in the system scientists use for publishing and rewarding scientific research, for which I was both attacked and praised.

My remarks focused on the power of certain journals, which I refer to as luxury journals, that have distorted how science and scientists operate.

Further Reading

Op-ed: It's always good to declare that you won't play after you have won.

I was not surprised by the range of opinions my comments provoked, but I have been impressed by their quantity. The evidence that the scientific community wants and needs this discussion could not be stronger. I write this to respond to some of the criticisms, to expand on some points I made, and to suggest some next steps.

It is understandable that some see hypocrisy in my criticism of a system that has served my own career well. I have published extensively in Nature, Cell, and Science. I have now, of course, won the Nobel Prize. It is therefore easy, some have said, for me to voice my concerns. But that, in some ways, is exactly the point. I am saying what many others believe but feel they cannot say, because they fear their careers might be damaged.

Yet others have spoken out. I recognize that I am not the only person to criticize luxury journals and an academic reward system that relies too much on them. I applaud those who reached this view long before me. I accept that I could have spoken out earlier in my career, but the Nobel Prize has afforded me a platform from which to speak loudly. The charge of hypocrisy would be fair were I still submitting my own research work to luxury journals. I see none in speaking out while doing as I say.

It has also been pointed out that I have a conflict of interest. I have edited a major subscription journal (Proceedings of the National Academy of Sciences, PNAS), and now edit an open access one (eLife), both of which compete with the luxury journals in different ways. But I have long held a negative view of the role of impact factors, an imperfect measure of the importance of a journal and its content, and shared my views with the staff and editorial board members who served with me during my term as editor-in-chief of PNAS. The problems with the scientific rewards system extend beyond the competition among the journals.

I have also been clear as to the extent of this conflict. As was declared in the Guardian article, I am leading a challenge to the luxury journals as editor-in-chief of eLife. I am doing this work because I believe that journals need to be radically improved and we have the means to achieve this. Though I draw an employee’s salary, I have no wider financial stake in eLife’s success, and I have always been entirely open about my role. I believe my argument would be weaker if I were not also attempting to change the system in some ways.

I understand, too, concerns that my stance will have career implications for junior colleagues in my lab. I shared these concerns, which is why I discussed the issue with them more than two years ago, when I took on the editorship of eLife. My colleagues agreed then, as they do now, that we should be challenging the big journals, and that papers we would once have submitted to Science, Cell, and Nature should go elsewhere.

I am deeply committed to developing the careers of younger scientists I work with—that, indeed, is a major motivation for my argument. I do not want them to have to play a system where the artificial scarcity of prestige publications makes recognition and advancement such a lottery. It is gratifying that several of my lab colleagues have publicly supported me.

My purpose in avoiding luxury journals, other than being seen to walk the walk, is not necessarily to prompt others to do the same. Rather it is to prompt reflection among researchers, institutions, and funders, who are in a position to limit the poor incentives that the reliance on luxury journals has created. I want scientists and administrators, especially those involved in funding, promotion, recruitment and tenure, to think hard about the influence that publishing decisions and research assessments have. That is the way we will drive change.

One of the most important changes we need is for journals to exploit the advantages of publishing online rather than in print. Too many journals remain wedded to print, artificially limiting the number of papers they accept. This made sense when journals were constrained by page counts, but makes much less sense in a digital world. It makes journals more selective than they need to be, driving extreme competition for space that is good for subscription businesses but bad for science.

Intense competition for space in key journals means that the editorial process often involves multiple rounds of revision, review and resubmission, causing long delays in publication. Additional experimental data and information are often demanded by reviewers who might later, as authors, be competing for space in the same journals. Much of this data is then relegated to supplementary appendices. The experience can be highly dispiriting for researchers.

I see a solution in open-access journals. They generally cover their costs upfront, for example using a business model whereby a fee is levied for publication. This model is more suited to the digital medium: all the work that meets the editorial criteria for the journal can be published, and it can be made freely available to everyone. As high-quality science grows, so can the number of articles published. This, more than anything, is what makes eLife not like the luxury journals: it is selective, but will publish everything that meets the editors' standards. There is no picking and choosing to meet a quota. It also tries to address some of the other issues listed above, for example using a much more efficient editorial process. And when eLife receives an impact factor, we will not promote it.

Journals, however, are only one half of this equation. We also need to address the demand for luxury journals, from researchers themselves and from the institutions that use them to judge scientific quality. We need to discuss what researchers, universities and funders can do to remove the incentives that make it rational to publish under the biggest brands. I would like to suggest four places to start.

Academics who serve a role in research assessment could shun all use of journal names and impact factors as a surrogate measure of quality. New practices and processes must be devised and shared so that we can rapidly move forward. My Berkeley colleague Michael Eisen has added an important point: we must speak up in appointment and funding committees when we hear others use journal names this way. Here we need peer pressure as much as we need peer review.

Researchers applying for positions, funding, and tenure should avoid any mention of impact factors in their applications or CVs. Article metrics might have a role to play, but narrative explanations of research significance and accomplishments would be more helpful.

Funders, universities, and other institutions should make it clear to their review committees that journal brand cannot be used as a proxy for scientific quality. If reviewers object, they should find different reviewers.

Many of us serve as editors or editorial board members of journals—and we could insist that the publishers of these journals stop promoting impact factors. Instead, the journals could emphasise the other valuable services they provide to authors and readers to promote their worth to the community.

No doubt others will come up with bigger and better ideas to move us away from the problems that we currently face. If I have helped to spark a discussion, I’m delighted. Now we have to turn our attention to action.

Promoted Comments

Sorry, but until my tenure review board, grant review boards, and academic review board take up some of his suggestions, I'd be shooting myself in the foot to ignore high impact journals.

It's all well and good for the established faculty to bemoan the situation with glamor journals, but until they institute the change (since they occupy all the senior positions), young researchers only face negative consequences for refusing to publish in journals like Science and Nature. Perhaps when/if I am granted tenure, I would be willing to support this, but right now I have a career to watch out for, and graduate students to support. These things are predicated on a system that REQUIRES high impact publications.

This is a change that will require a top-down approach, as until the criteria change, it's too much of a risk for young researchers.

Sorry, but until my tenure review board, grant review boards, and academic review board take up some of his suggestions, I'd be shooting myself in the foot to ignore high impact journals.

Absolutely,

Less than 24 hours ago I signed the paperwork for my tenure dossier to go forwards for review. My university's guidelines for dossier preparation say the following regarding academic output (emphasis is mine):

Quote:

Scholarship and creative activity are understood to be intellectual work whose significance is validated by peers and which is communicated. As specified in the Promotion and Tenure Guidelines, such work in its diverse forms is based on a high level of professional expertise; must give evidence of originality; must be documented and validated as through peer review or critique; and must be communicated in appropriate ways so as to have impact on or significance for publics beyond the University, or for the discipline itself.

<edited some stuff out>

Where not obvious, the dossier should explain how the work was validated and communicated. It is also important to know the significance of the scholarship and creative activity and the stature of the sources in which they appear. These can be commented on after each listing, and discussed in letters of evaluation from the promotion and tenure committee, the Department Chair, Head, Director, or Dean.

So, you need to tell them the impact, and one of the ways that you do this is by giving the impact factor of the journal you have published in. For 2012 Cell had an impact factor of 31.957, Nature had 38.597, Science 31.027.

In my particular field, the highest impact journal has an impact factor of 3.426.

So, at a simplistic level, publishing just one paper in Nature would be viewed as having 10x the impact of my personal highest impact publication. Easy to see that it is in your interest to get into these journals if you can.
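That "10x" can be checked directly against the figures quoted above (the 2012 impact factors of Nature and the commenter's field-leading journal); a quick sketch:

```python
# Ratio of Nature's 2012 impact factor to the top journal in the
# commenter's field, using the figures quoted in the comment above.
nature_if = 38.597
field_top_if = 3.426

ratio = nature_if / field_top_if
print(round(ratio, 1))  # 11.3 -- roughly the "10x" in the comment
```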

I want to thank Dr. Schekman for taking this stance, even when it brings accusations of hypocrisy and self-interest. I personally believe that he is absolutely correct about science's over-reliance on "luxury" journals.

General-interest, high-profile scientific journals have a vital place in the scientific community as a conduit for globally interesting ideas. We, as a community, have somehow equated them with career success.

I'm a faculty member and administrator deeply involved in the promotion and tenure process at a major academic medical center. We started deemphasizing impact factors two years ago, and actually provide our promotion and tenure committee with sets of bibliometrics that do not contain journal names or impact factors precisely for these reasons.

We still look at h-index and citations, but we emphasize the secondary role of bibliometrics when compared to the quality of the work as determined by peer review.

For Jack: An impact factor is supposed to be a measure of the importance and/or quality of a journal. It's determined by dividing the total number of citations articles in that journal receive over a period of time (usually two years) by the number of articles published in that period.

In other words, average citations received per article over a period of time.

A citation is usually considered a kind of endorsement. The thinking is that the more citations, the more other scientists are reading the work and incorporating it into their own. Therefore journals whose articles get lots of citations -> important journals.

Anyone with citation information can compute them, but the commonly-used impact factors table comes from the Journal Citation Reports, from Thomson/Web of Science.
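As a toy illustration of the formula described above (all numbers here are invented for the example, not real journal data):

```python
# Toy impact-factor calculation: citations received this year to the
# articles a journal published in the previous two years, divided by
# the number of those articles. All numbers here are invented.
citations_to_two_year_window = 9000   # citations to articles from the window
articles_in_window = 450              # citable items published in the window

impact_factor = citations_to_two_year_window / articles_in_window
print(impact_factor)  # 20.0 -- average citations per article
```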

However, on general principle, I think efforts like Dr. Schekman's are useful: even if he's unsuccessful at completely overturning the "luxury journal model", shaking the system a bit usually seems to help. Knock some of the cruft loose, get people looking at new (or at least different) ways to do things, etc.

I do think it would help to make editorial selection less of a factor. If an editor disagrees with a paper, under the current regime, it seems like it wouldn't be all that hard to keep the paper from being published in that particular journal, and having to fall back to a secondary (less impressive) journal would reduce the exposure and perceived validity of the paper. Going to a model not limited by page count seems like it would reduce selection (at least somewhat) to findings of fact, rather than "I don't like this technically-strong paper, so I'll print this more personally acceptable paper instead".

I feel we as a society should take this stance in regards to all the gatekeepers in all the different mediums. Allowing content makers to dictate which content is relevant and why, diverts us from any real conversations or progress as a society.

One area not addressed by the editorial is publicity. The luxury journals have a near (but diminishing) monopoly on the scientific news cycle. I was lucky enough to have a two-page perspective published in Science, and it got far more press coverage than any of my actual papers that involved long, painstaking work resulting in relevant and applicable findings. With the Science perspective, various online publications and even our funding source were emailing and calling for quotes and putting us in their publicity materials. Of course, publicity is less important than tenure and hiring decisions, but it still plays an important role in improving morale and validating all the hard work we put in to reach that point.

My perspective in Science is also my second most highly cited article, and I am sure that it will quickly outpace my current citation champion, which is a proper research article. Even if we discard the impact factor and journal name from career decisions, the bully pulpit of the luxury journals is still powerful enough to skew the bibliometrics that Dr. Schekman and others still believe in.

I kind of get the feeling that making these journals so difficult to get into has made the scientific community lazy to an extent. It's much easier to just read a prestigious journal and say "Oh well it was in the ABC Science Journal, so you KNOW it has to be good." In a system where there's not such a high barrier to entry there's going to be a lot more material. Instead of just reading the top two or top three journals, teams may have to peruse a staggering online article store (or two).

Great praise to Randy Schekman for his powerful stance, and I agree with him wholeheartedly. Such a system of relying on a few "good" journals (and deeming them such because of the difficulty of getting published) could definitely have a chilling effect on very important but perhaps less popular science. I just can't help but think he's got a bit of an uphill battle... you always do when your goal is to change the status quo and thereby an entire industry, even if the change is for the better.

As a librarian, I am regularly confronted with the absurdity that universities pay professors to do research and then pay exorbitant fees to buy access to the results.

I've also found it strange that most professors list the impact factors of the journals they have published in instead of the number of citations of their own articles. Shouldn't we care more about the impact of their research than that of other authors in the same publication?

I kind of get the feeling that making these journals so difficult to get into has made the scientific community lazy to an extent. It's much easier to just read a prestigious journal and say "Oh well it was in the ABC Science Journal, so you KNOW it has to be good." In a system where there's not such a high barrier to entry there's going to be a lot more material. Instead of just reading the top two or top three journals, teams may have to peruse a staggering online article store (or two).

It's actually quite the opposite. Most of us are very happy to see something from our field in a big journal, but we usually spend most of the time thinking, "WTF, that's already well known..." rather than, "gee, that's amazing and useful because it's in ABC journal".

The area where we get lazy is in citing the articles in our papers and proposals. Having a luxury journal article explicitly say, "someone should look into X in the future to expand on our study..." is solid gold when writing a proposal and can sometimes replace a more well-thought-out analysis of the problem and scope we are trying to solve. (Normally, the researcher writing the proposal will have thought deeply about the project, but it's quicker and easier to include the quote from the luxury journal than to concisely and effectively expound on your own ideas.)

EDIT - For most of us, there is absolutely no way you could function without reading the specialty journals in your field. The coverage in the luxury journals is far too sporadic to give any real sense of what is going on.

As a librarian, I am regularly confronted with the absurdity that universities pay professors to do research and then pay exorbitant fees to buy access to the results.

I've also found it strange that most professors list the impact factors of the journals they have published in instead of the number of citations of their own articles. Shouldn't we care more about the impact of their research than that of other authors in the same publication?

I don't want to defend this practice, but explain it.

Citations take many years to accumulate. Impact factors are instantly available. That's why they are used.

As a librarian, I am regularly confronted with the absurdity that universities pay professors to do research and then pay exorbitant fees to buy access to the results.

I've also found it strange that most professors list the impact factors of the journals they have published in instead of the number of citations of their own articles. Shouldn't we care more about the impact of their research than that of other authors in the same publication?

I'm just a layman, but sometimes I like to read the results of scientific studies that might be relevant to me. Unfortunately, I can't because I'm not subscribed to the expensive journals they're published in.

I'm just a layman, but sometimes I like to read the results of scientific studies that might be relevant to me. Unfortunately, I can't because I'm not subscribed to the expensive journals they're published in.

But you can, though.

Abstracts are freely available, and if you really want to read a paper, send a polite email to the corresponding or lead author. More often than not they'll send you the pdf.

I'm just a layman, but sometimes I like to read the results of scientific studies that might be relevant to me. Unfortunately, I can't because I'm not subscribed to the expensive journals they're published in.

In the vast majority of cases, you can email the corresponding author, listed on all publications, and they will gladly send you a copy. In fact, the luxury journals have some of the least restrictive copyright rules for the authors.

That said, I look forward to the day when you simply need to click on the download link...

I'm just a layman, but sometimes I like to read the results of scientific studies that might be relevant to me. Unfortunately, I can't because I'm not subscribed to the expensive journals they're published in.

THIS ^^^^

I'm a scientifically trained medical professional with a wide-ranging scientific curiosity. In my opinion the prestige journals serve primarily the prestige journals, and the lucky few who get published there, and not many more. I expect online e-journals will largely supplant them in the future, as they really can't continue to leverage their position in the Google age. I have trouble getting pertinent information about some science results, such as the methodology used to determine a given outcome, especially in the mass-media outlets. Ars does a MUCH better job than most at aggregating articles and explaining the hows of a given result. That being said, I think it could be MUCH better. The prestige-journal model is an artificial monopoly that, due to changing circumstances and detractors like Dr. Schekman, is likely to lose its huge edge unless they change their model. After Chris's opinion piece, getting Dr. Schekman to basically write a rebuttal is great.

I'm just a layman, but sometimes I like to read the results of scientific studies that might be relevant to me. Unfortunately, I can't because I'm not subscribed to the expensive journals they're published in.

In the vast majority of cases, you can email the corresponding author, listed on all publications, and they will gladly send you a copy. In fact, the luxury journals have some of the least restrictive copyright rules for the authors.

That said, I look forward to the day when you simply need to click on the download link...

Also, if you're somehow involved with a university (or, ahem, just know someone who is) you can use their academic access for most papers.

Although even that isn't a guarantee: as I understand it, universities buy access packages, so if you're unlucky your paper isn't included and you're just out of luck. (To be fair, at least for my university and in the CS field [i.e. IEEE, ACM, and Springer], the only articles I often can't get are ancient ones from the 70s that are only of historical interest anyhow.)

Some institutions, like mine, are already well ahead of the curve on this.

For example, where I work, University of California at Santa Cruz, publishing in Cell, Science and Nature has actually helped to stifle my progress (and salary) to the point where I will have to consider finding another position.

I agree the holy trinity of journals can be very arbitrary, and in the case of Cell and Nature, sometimes uninformed. However, they are hardly the only factor that impacts the direction and shape of future science. The funding agencies themselves have far more direct impact.

For an ordinary (poor student) bloke, luxury automobiles seem like an absurd waste of money. They work the same and drive the same and get you from point A to point B the same as a 9-year-old, run-of-the-mill used car, at least 90 percent of the time.

But if you have the money, that extra 10 percent of luxury is nice, even if it's overpriced for what you actually get. That 10 percent is something most people don't have. You end up socially defining yourself based on the brand of car you drive, at least partially; and certainly others do. We really do see people who drive luxury cars as "superior" in mysterious ways even if we know it's artificial or flat-out wrong in the case of, say, a gangster in his limousine. Someone will worship it. They must be doing something better than you are, right?

And so-so cars can be marketed as luxury items via advertising and careful brand management. I won't mention a European brand or two that impressed the neighbors but turned out to be a maintenance nightmare.

I've wondered how people in communist or less materialistic societies perceive cars. Strip the advertising away and is a Benz really any better than a Ford? Do people naturally migrate towards respecting and coveting prestige items? Is following crowd behavior a substitute for assessing quality yourself? Psychology would say yes.

Not all scientific work is equal, and we naturally migrate towards somehow recognizing uncommonly excellent work, as most fields do. But luxury journals create a false prestige by making acceptance of manuscripts so uncommon. They (or we) have built a false luxury hierarchy where elitism is a substitute for quality. It is bad enough to follow the goals of the crowd as a substitute for deciding quality for yourself, wanting something because everybody else does. But worse, we follow the person in front of us, who is following the person in front of them, to read the "prestigious" journals (rare to get published in), and it becomes a blind mob action. People think, 'if it's rare to get published there, then it must be good.' We end up chasing hood ornaments instead of appreciating a good drive on a nice road.

It's interesting that the luxury journals cover such a broad scope of fields and subfields within their pages. It's as if they're pretending to be expert content selectors on everything, which fosters the blind assumption that they are. And of course they don't have space (and won't create space) to publish the vast majority of submitted papers in each of those subfields, so the illusion persists that they're selecting only the top-quality papers. Wow, only three articles were selected in Science in medical anthropology last year, so they must be the best. But as the author states, it's really only a lottery.

Printed words are printed words, and it's easy and cheap now to send words to computer screens. The concept of a journal as a brand-name reputation curator has become irrelevant. With the internet, research articles can be electronically organized by theme and subject area. Quality can be decided by a more honest peer review and community acceptance, with results, accolades, caveats and debates being more democratic and rapid. Editors and theme management still have their place, but without the disingenuous, brand-centered phoney elitism of luxury journals.

This certainly is a discussion that needs to be had. I know many people on both sides of the fence, and it just keeps reinforcing the idea that a change is needed, regardless of who is right. I use ResearchGate for my publications. If I am an author, I can take the manuscript and upload it to ResearchGate, where it is free for anyone to read/download. It doesn't really solve anything, it just bypasses the paywalls for people.

I think the crux of the matter is curation. The real scarcity is not pages but the time of professional researchers. In my field (cosmology and astrophysics) almost all papers are freely available via the arXiv website in addition to the journal, and to be honest it is through arXiv that I discover most papers. But I can't read all of the papers in my field, it is just physically not possible.

So how do I triage the deluge of papers? I effectively score papers using the following imperfect criteria:

1) It is in my sub-sub field. I have probably already seen the work in a talk, but looking forward to seeing the paper for the details.

2) It is by someone I know that does good work and writes well. Of course this is bad for young researchers or folks from developing countries. But established players have personal brands. If Steve Furlanetto writes a paper I know it is going to be interesting and well written.

3) A colleague recommends it to me. Some people use blogs for this, for me it is usually an email or a mention in a telecon or conference. Good papers are handed around, and discussed in journal clubs.

4) It is in a top journal...

So why do I use item 4? The editors are helping to curate the field for me, sifting through the sea of papers. So in some circumstances top journals serve an important role for up and coming researchers—the journal's stamp of approval helps raise the visibility of work I would not have found otherwise.

I agree impact factors are of little use. But adding yet another journal to the mix makes little sense to me. The real service of top journals is, and has always been, curation of the scientific literature for busy professionals. I'm more than open for better or additional ways of doing this, but peer review is an effective form of curation when incredibly specific knowledge is required.

@Jeremy2013 That's a good point and summary of the situation, but I think one thing that could make a difference would be if the brand actually represented something tangible. If the more prestigious journals actually did have higher standards of review, then the crowd would be right, and off-loading some of your own analysis based on popularity would make sense. People outsource their thinking to the crowd all the time, because if enough people are doing something then it usually isn't all bad; the alternative is to be paralyzed by information and decision-making, trying to reason about everything you do all the time, which is stressful and a waste of valuable time and resources. Of course, as you point out, brand managers and advertising are trying their best to subvert this natural process, to bend the perception of the crowd to their will and for their purpose, against the best interest of the crowd (or they wouldn't need the emotional razzmatazz to do it).

So why do I use item 4? The editors are helping to curate the field for me, sifting through the sea of papers. So in some circumstances top journals serve an important role for up and coming researchers—the journal's stamp of approval helps raise the visibility of work I would not have found otherwise.

Ultimately this is the same reason I subscribe to Ars Technica and LWN.net, I've outsourced the curation of my information to some well-connected smarties who sift through the dross and pull out the gems.

Sorry, but until my tenure review board, grant review boards, and academic review board take up some of his suggestions, I'd be shooting myself in the foot to ignore high impact journals.

It's all well and good for the established faculty to bemoan the situation with glamor journals, but until they institute the change (since they occupy all the senior positions), young researchers only face negative consequences for refusing to publish in journals like Science and Nature. Perhaps when/if I am granted tenure, I would be willing to support this, but right now I have a career to watch out for, and graduate students to support. These things are predicated on a system that REQUIRES high impact publications.

This is a change that will require a top-down approach, as until the criteria change, it's too much of a risk for young researchers.

But that's the problem: why let the luxury journals be the curators of "impact"? It's phony. Surely there are better ways to let good research rise to the top. That's a legitimate criticism of academia now: it used to be publish or perish; now it's subscribe to the phony curation of expertise or perish.

And is your career really riding on getting published in Science or Nature? Good luck.

Why not have a monthly journal called Editors' Picks, where the editors of all journals submit their best picks for republishing? It would be read far more widely than the specialty journals are.

Exactly why it is important for "made it!" researchers like Schekman to do what he is doing. I think there might be a market for a science-paper review publication designed for the scientifically literate: rather than just reporting findings, also do a quick-and-dirty overview of the methods and then an analysis of the potential ramifications of the information. Oh wait, that's what Ars does.

Absolutely.

Less than 24 hours ago, I signed the paperwork for my tenure dossier to go forward for review. My university's guidelines for dossier preparation say the following regarding academic output (emphasis is mine):

Quote:

Scholarship and creative activity are understood to be intellectual work whose significance is validated by peers and which is communicated. As specified in the Promotion and Tenure Guidelines, such work in its diverse forms is based on a high level of professional expertise; must give evidence of originality; must be documented and validated as through peer review or critique; and must be communicated in appropriate ways so as to have impact on or significance for publics beyond the University, or for the discipline itself.

<edited some stuff out>

Where not obvious, the dossier should explain how the work was validated and communicated. It is also important to know the significance of the scholarship and creative activity and the stature of the sources in which they appear. These can be commented on after each listing, and discussed in letters of evaluation from the promotion and tenure committee, the Department Chair, Head, Director, or Dean.

So, you need to tell them the impact, and one way to do this is by giving the impact factor of the journals you have published in. For 2012, Cell had an impact factor of 31.957, Nature 38.597, and Science 31.027.

In my particular field, the highest-impact journal has an impact factor of 3.426.

So, at a simplistic level, publishing just one paper in Nature would be viewed as having roughly 10x the impact of my personal highest-impact publication. It's easy to see that it is in your interest to get into these journals if you can.
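The arithmetic behind that "roughly 10x" claim can be checked back-of-the-envelope. The impact-factor definition below is the standard one (citations received in year Y to items published in the two preceding years, divided by the number of citable items from those years); the function and variable names are just illustrative:

```python
# A journal's impact factor for year Y is:
#   citations received in Y to items published in Y-1 and Y-2,
#   divided by the number of citable items published in Y-1 and Y-2.
def impact_factor(citations_in_year: float, citable_items: float) -> float:
    """Compute a journal impact factor from raw counts."""
    return citations_in_year / citable_items

# 2012 impact factors quoted in the comment above.
nature = 38.597
field_top = 3.426  # highest-impact journal in the commenter's field

ratio = nature / field_top
print(f"Nature vs. field-top journal: {ratio:.1f}x")
# → Nature vs. field-top journal: 11.3x
```

So the actual ratio is a bit over 11x, which the comment rounds down to "10x"; either way, the order-of-magnitude gap is the point.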

I understand that there is a perceived top-down problem, where those lower on the proverbial ladder feel that they 'have to' publish through the currently accepted channels and journals.

However, my perception is that the various review boards are equally under pressure, perhaps not from people, boards, or institutions within, but rather from the industry known as academia at large. It is thus that I think this is not so much a top-down problem as a gridlock problem.

Re: impact factor. The most-cited journal is not necessarily the most-read. If a journal is on the Internet, then actual readership (page views) can be tracked. Both are important.

Much as we can't really comment on the sound of falling trees in deserted forests, if something is read but not cited, we have no good way of knowing that it has contributed to the growth of knowledge. And in the end, that's what the process of tenure and grant applications is all about (or should be).

But of course most-read isn't most useful. The disastrous Andrew Wakefield MMR-autism paper was both heavily cited and heavily read while contributing practically no useful new knowledge whatsoever. Personally, I remain thoroughly unconvinced that the impact of a paper can be reduced meaningfully to a matter of numbers, as seductive as that notion might be.

Exactly. I read a lot of stuff that I don't cite. For good reason...

Whatever metric anyone proposes, it will be gamed. We need to find the metric that incentivizes people in the right way to produce the best science they can. Counting the number of publications will lead people to write trivial publications; counting high-impact journal publications will lead to overblown claims (the situation we're in now); counting the number of citations will create bandwagons and strategic referencing. If there is a good metric out there, I am not aware of it.

Actually, the best metric is probably letters of recommendation: knowledgeable people talking about the impact. It drives us hard-science types who like a number crazy, but sometimes the best evaluation tools are qualitative.

The Conversation is an independent source of news and views, sourced from the academic and research community. Our team of editors works with these experts to share their knowledge with the wider public. Our aim is to allow better understanding of current affairs and complex issues, and hopefully to improve the quality of public discourse on them.