The BMJ takes on impact factors

As the impact factors of science journals become widely recognized …

With an ever-growing roster of specialized scientific publications out there, how are scientists, funding agencies, and academic institutions supposed to decide which ones to take seriously? Anyone in the field and many outside of it know that we rate a publication in Nature as being more significant than one in The Journal of Obscure, Small Invertebrates, but what about all of the journals in between? Is getting something into Genes and Development more or less significant than success with plain old Development?

One answer that's become widely used within the scientific community is the Impact Factor, a measure of how big a splash different publications make. Generated by Thomson Scientific, the Impact Factor tracks how often the academic works in a publication get cited elsewhere: citations in a given year to articles the journal published in the previous two years, divided by the number of citable items it published in those years. The more citations per article, the higher the impact.
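The standard calculation uses a two-year window: citations this year to what the journal published in the prior two years, divided by its citable items from those years. A minimal sketch, with invented numbers:

```python
def impact_factor(citations, citable_items):
    """Two-year Impact Factor: citations this year to the journal's
    articles from the previous two years, divided by the number of
    'citable' items it published in those years."""
    return citations / citable_items

# Hypothetical journal: 1,200 citations this year to articles it
# published over the previous two years, across 400 citable items.
print(impact_factor(1200, 400))  # → 3.0
```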

Over time, Impact Factors have been adopted by everyone from granting agencies to academic institutions, which use them to weigh the significance of what might otherwise be a long and bewildering list of publications. They are even considered in the process by which British academic funding is allocated. In response, authors have started considering Impact Factors when searching for a place to send hot results, and some journals have gone so far as to place their Impact Factor on their home page.

The BMJ has commissioned a series of articles (the first abstract has links to the remaining ones) taking a look at the Impact Factor system and all the issues with it. Many of these are structural: as few as 10 percent of a given journal's publications can account for the majority of its citations, while as many as half of the papers out there never get cited. Meanwhile, papers that are flawed or fraudulent can set off a series of corrections, correspondence, and critical responses, all of which can actually raise the impact of the journal publishing the questionable work.

But the collection of essays also opens an interesting window into the steps those in the publishing industry can (and apparently do) take to skew the Impact Factor. For one, note the vagueness of the term "academic works" above. Since the citation count is divided by the total number of articles included in the calculation, excluding anything that's unlikely to be cited can significantly improve the result. Large publishers can afford to have staff work with Thomson Scientific to ensure that as many articles as possible get excluded; smaller publishers are probably out of luck.
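To see why the denominator matters, here's a toy illustration with invented figures:

```python
# Invented figures: the citation count stays fixed while the
# denominator shrinks as "non-citable" items get excluded.
citations = 1200
all_items = 500        # everything the journal published
citable_items = 400    # after editorials, letters, etc. are excluded

print(citations / all_items)      # → 2.4
print(citations / citable_items)  # → 3.0
```

Same citations, same journal, but negotiating 100 items out of the denominator lifts the score by a quarter.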

There are also steps that work in the other direction, adding citations rather than excluding articles. Reviews of the current state of knowledge tend to be cited frequently in subsequent publications, so shifting the ratio of reviews to primary research papers can push the Impact Factor up. The short research summaries many journals run to highlight work in the same issue also add citations, even though the summaries themselves don't count toward the article total. Year-end research roundups serve the same function. All of this can happen before a journal even gets into editorial decisions such as steering clear of less-popular topics.
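The summary trick works on the numerator the same way exclusions work on the denominator. Again with invented figures:

```python
# Invented figures: short summaries draw citations (numerator grows)
# but are excluded from the citable-item count (denominator fixed).
citations_to_articles = 1200
citations_to_summaries = 150
citable_articles = 400  # the summaries themselves aren't counted here

print(citations_to_articles / citable_articles)  # → 3.0
print((citations_to_articles + citations_to_summaries)
      / citable_articles)                        # → 3.375
```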

It's clear that the Impact Factor does allow some discrimination; journals like Science, Nature, and The New England Journal of Medicine are widely respected and have high Impact Factors. But that does little more than tell us what we already knew. Impact Factors are most useful for gauging the relative merits of the vast field of journals in the middle of the pack, where the differences between them are small. Unfortunately, those small differences are exactly the ones most susceptible to the sorts of manipulation described in the BMJ.