ScienceOnline 09: Beyond the valley of the impact factor

Concerns over the nature of Impact Factors and how they're used and abused …

Our own Adam Stevenson has just described how Google's PageRank might provide a better way of analyzing the significance of scientific publications; it's a timely study, considering one of the sessions from the second day of ScienceOnline'09.

As we've written about before, there are some significant problems with the way that impact factors are used and abused by the scientific community, problems with the way the numbers are generated by Thomson Reuters, and worries that an over-reliance on those numbers by funding agencies and university tenure committees is having a deleterious effect on how scientists work. Adam delves into these issues in detail, so this article will focus on some potential solutions, as well as other issues faced by scientific publishing that were discussed at ScienceOnline'09.

PLoS ONE, PLoS's multidisciplinary journal that's fast joining Nature, Science, and PNAS at the top tier of non-specialist science publications, has a pretty simple solution: it's actively eschewing impact factors. You can read more about those efforts at The Scientist and over at Open Access News.

The DOI, a unique digital identifier for publications, also came up. We like DOIs here at Ars Technica: since we're an online publication and the links we provide point to online articles, it never made much sense to cite papers by year/issue/page. DOIs can be resolved to the journal's website via doi.org, and (mostly) everyone's happy. Some journals are now asking for DOIs to be included in their references, and doing so should help identify just who is or isn't citing your work. It was pointed out that DOIs on their own aren't a perfect solution, since a typo of even a single character would throw off the system, but combined with other bibliographic data this should be less of an issue.
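To make the resolution step concrete, here's a minimal sketch of how a citation's DOI maps onto a resolver link. The function name and the example DOI are ours, chosen for illustration; the sketch simply normalizes a DOI string and prepends the doi.org resolver, which then redirects to the publisher's page.

```python
def doi_url(doi: str) -> str:
    """Turn a bare DOI string into a doi.org resolver URL.

    Strips surrounding whitespace and an optional "doi:" prefix,
    then points the identifier at the doi.org resolver, which
    redirects the browser to the journal's own article page.
    """
    doi = doi.strip()
    if doi.lower().startswith("doi:"):
        doi = doi[len("doi:"):].strip()
    return "https://doi.org/" + doi


# A PLoS ONE-style DOI (identifier invented for the example):
print(doi_url("doi:10.1371/journal.pone.0000001"))
```

Note how little the identifier itself tells you: one mistyped character produces a link that resolves to nothing (or to the wrong paper), which is why pairing DOIs with the usual bibliographic data still matters.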

Another idea floated was the use of unique identifiers that follow scientists through their careers. This would avoid the problem of conflating two researchers who share a name, or of failing to find all of an author's work. There's another Jonathan Gitlin who publishes a lot, and I've taken to using my middle initial to avoid being mistaken for him; but there are actually two John R. Timmers publishing, so even that workaround has its limits. You can find more about the idea from Cameron Neylon.

Finally, there's the crowdsource option. This would take advantage of sites like CiteULike, which lets users save papers they like. Papers could also be tracked using data from the journals themselves detailing the number of downloads or page views individual articles receive, which might be the truest metric of how wide (or narrow) an impact a paper is actually having.

The only real impediment that I can see on the horizon is the same one I identified in my post on video journals: convincing the more technophobic and conservative members of the academic world to get on board.