Why do we care where we publish?

We delve into the relationship between the significance of a particular piece of research and the journal in which it appears.

Let me preface this piece by saying that it is purely a work of opinion. However, I would like to think that being both a working scientist and a science writer gives me a unique perspective on science, scientific publications, and the significance of scientific work. The final disclosure should be that I have never published in any of the top-ranked physics journals or in Science, Nature, or PNAS. I don't believe I have an axe to grind about that, but I am also sure that you can ascribe some of my opinions to PNAS envy.

If you asked most scientists what their goals were, the answer would boil down to the generation of new knowledge. But, at some point, science and scientists have to interact with money and administrators, which has significant consequences for science. For instance, when trying to employ someone to do a job, you try to objectively decide if the skill set of the prospective employee matches the one required for the job. In science, the same question has to be asked—instead of being asked once per job interview, however, this question gets asked all the time.

Why? Because science requires funding, and no one gets a lifetime dollop-o-cash to explore their favorite corner of the universe. So, the question gets broken down into "How competent is the scientist?" "Is the question they want to answer interesting?" "Do they have the resources to do what they say they will?" We will ignore the last question and focus on the first two.

Assessing quality

How can we assess the competence of a scientist? Past performance is, realistically, the only way to judge future performance. Past performance can only be assessed by looking at their publications. Were they in a similar area? Are they considered significant? Are they numerous? Curiously, though, the second question is also answered by looking at publications—if a topic is considered significant, then there will be lots of publications in that area, and those publications will be of more general interest, and so end up in higher ranking journals.

So we end up in the situation that the editors of major journals are in a position to influence the direction of scientific funding, meaning that there is a huge incentive for everyone to make damn sure that their work ends up in Science or Nature. But why are Science, Nature, and PNAS considered the place to put significant work? Why isn't a new optical phenomenon, published in Optics Express, as important as a new optical phenomenon published in Science?

This is actually a complicated question, where there is a lot of self-reinforcement. The big three try to be general; they will, in principle, publish reports from any discipline, and they anticipate readership from a range of disciplines. This explicit generality means that the scientific results must not only be of general interest, but also highly significant. The remaining journals become more specialized, covering perhaps only physics, or optics, or even just optical networking. However, they all claim to only publish work that is highly original in nature.

Are standards really so different? Naturally, the more specialized a journal is, the fewer people it appeals to. However, the major difference in how originality is judged is one of degree, and of who does the refereeing. A more specialized journal has more detailed articles, so the differences between experiments stand out more obviously, while appealing to general interest shifts the emphasis of the article away from details and toward broad conclusions.

The problem with this is that, as the audience becomes broader, more technical details get left by the wayside. Note that none of the gene sequences published in Science have the actual experimental and analysis details. What ends up published is really a broad-brush description of the work, with the important details either languishing as supplemental information, or even published elsewhere, in a more suitable journal. Yet, the high profile paper will get all the citations, while the more detailed—the unkind would say accurate—description of the work gets no attention.

And that is how journals are ranked. Count the number of citations for each journal per volume, run it through a magic number generator, and the impact factor jumps out (make your checks out to Thomson ISI, please). That leaves us with the following formula: grants require high-impact publications, high-impact publications need citations, and that means putting research in a journal that gets lots of citations. Grants follow the concepts that appear to be currently significant, and that's decided by work that is published in high-impact journals.
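The magic number generator is, roughly, simple arithmetic: a journal's impact factor for a given year is the number of citations received that year by articles published in the previous two years, divided by the number of citable articles published in those two years. A quick sketch of the calculation, with invented numbers:

```python
# Sketch of the two-year impact factor arithmetic; the figures below are invented.
def impact_factor(citations_to_recent_articles: int, citable_articles: int) -> float:
    """Citations received this year to articles from the previous two years,
    divided by the number of citable articles published in those two years."""
    return citations_to_recent_articles / citable_articles

# A journal whose 2006-2007 articles picked up 40,000 citations in 2008,
# spread over 1,400 citable articles, gets a 2008 impact factor of about 28.6.
print(round(impact_factor(40_000, 1_400), 1))  # 28.6
```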

Doing science vs. writing science

This system would be fine if it did not ignore the fact that performing science and reporting scientific results are two very different skills, and not everyone has both in equal quantity. The difference between a Nature-worthy finding and a not-Nature-worthy finding is often in the quality of the writing. How skillfully can I relate this bit of research back to general or topical interests? It really is this simple. Over the years, I have seen quite a few physics papers with exaggerated claims of significance (or even results) make it into top-flight journals, and the only difference I can see between those works and similar works published elsewhere lies in the presentation and level of detail.

I see this problem as seeping out beyond the scientific community, because articles from the big three are much easier to cover on Nobel Intent than articles from, say, Physical Review D. Nevertheless, when we do cover work from the less prominent journals, the researchers sometimes suddenly realize that they could have gotten a lot more mileage out of it. It changes their approach to reporting their results, which I see as evidence that writing skill counts for as much as scientific quality.

If that observation is generally true, then it raises questions about the whole process of evaluating a researcher's competence and a field's significance, because good writers corrupt the process by publishing less significant work in journals that only publish significant findings. In fact, I think it goes further than that, because Science, Nature, and PNAS actively promote themselves as scientific compasses. Want to find the most interesting and significant research? Read PNAS.

The publishers do this by extensively publicizing science that appears in their own journals. Their news sections primarily summarize work published in the same issue of the same magazine. This lets them create a double-whammy of scientific significance—not only was the work published in Nature, but it was also summarized in Nature's News and Views section.

Furthermore, the top three work very hard at getting other journalists to cover their articles. This is easy to see by simply looking at Nobel Intent's coverage. Most of the work we discuss comes from Science and Nature. Is this because we only read those two publications? No, but they tell us ahead of time what is interesting in their upcoming issue. They even provide short summaries of many papers that practically guide people through writing the story, meaning reporter Jim at the local daily doesn't need a science degree to cover the science beat.

Very few of the other journals do this. I don't get early access to the Physical Review series, even though I love reporting from them. In fact, until this year, they didn't even highlight interesting papers for their own readers. This makes it incredibly hard for a science reporter to cover science outside of the major journals. The knock-on effect is that Applied Physics Letters never appears in the news, which means you can't evaluate recent news coverage to figure out what's of general interest, leaving you with... well, the big three journals again, which mostly report on themselves. On the other hand, if a particular scientific topic does start to receive some press attention, it is much more likely that similar work will suddenly be acceptable in the big three journals.

Judging significance

That said, I should point out that judging the significance of scientific work is a process fraught with difficulty. Why do you think it takes around 10 years from the publication of first results through to obtaining a Nobel Prize? Because it can take that long for the implications of the results to sink in—or, more commonly, sink without trace.

I don't think that we can reasonably expect journal editors and peer reviewers to accurately assess the significance (general or otherwise) of a new piece of research. There are, of course, exceptions: the first genome sequences, the first observation that the rate of the expansion of the universe is changing. But the point is that these are exceptions, and most work's significance is far more ambiguous, and even goes unrecognized (or over-celebrated) by scientists in the field.

The conclusion is that the top three journals are significantly gamed by scientists who are trying to get ahead in their careers—citations always lag a few years behind, so a PNAS paper with fewer than ten citations can look good for quite a few years, even compared to an Optics Letters paper with 50 citations. The top three journals overtly encourage this, because it is to their advantage if everyone agrees that they are the source of the most interesting science. Consequently, scientists who are more honest in self-assessing their work, or who simply aren't wordsmiths, end up losing out.

I do not know if this problem can be solved because every system can be gamed. However, there are some steps that could be taken. First, Science and Nature should either eliminate their News and Views section, or implement a policy of not reporting on their own articles. This would open up one of the major sources of "science news for scientists" to stories originating in other journals.

Second, scientific competence should not be judged by how many citations the author's work has received or where it was published. Instead, we should consider using mathematical graph analysis to look at the networks of publications and citations, which should help us judge how central to a field a particular researcher is. This would have the positive effect that where a paper was published matters less than who thought it was important.
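To make that concrete, here is a minimal sketch of what such a network analysis could look like. It builds a toy citation graph and ranks papers by PageRank-style centrality; the paper labels are invented, networkx is just one convenient tool, and this is only one of many possible centrality measures, not a recipe.

```python
# Minimal sketch of citation-network centrality; the papers and edges are invented.
import networkx as nx

# Directed graph: an edge A -> B means "paper A cites paper B".
citations = [
    ("smith2007", "jones2005"),
    ("smith2007", "lee2004"),
    ("brown2008", "jones2005"),
    ("brown2008", "smith2007"),
    ("zhao2008", "smith2007"),
    ("zhao2008", "lee2004"),
]

G = nx.DiGraph()
G.add_edges_from(citations)

# PageRank weights a citation from a well-cited paper more heavily than one from
# an obscure paper, so it reflects position in the network rather than raw counts.
centrality = nx.pagerank(G, alpha=0.85)

for paper, score in sorted(centrality.items(), key=lambda item: item[1], reverse=True):
    print(f"{paper}: {score:.3f}")
```

The same sort of graph could just as easily be built over authors rather than papers, which is closer to the question of how central a particular researcher is to a field.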

Chris Lee / Chris writes for Ars Technica's science section. A physicist by day and science writer by night, he specializes in quantum physics and optics. He lives and works in Eindhoven, the Netherlands.