Friday, 25 May 2012

Are the leading papers in a journal issue of better “quality”?

Given my previous post, the above question seems relevant. A new column at VoxEU.org, by Victor Ginsburgh, argues that lead articles in academic journals tend to receive more citations than other articles. But does this mean they are any better? Ginsburgh's column suggests that two-thirds of the additional citations leading papers receive are due to coming first in the journal, while only one-third reflect genuinely better quality.

Ginsburgh writes,

There exists a lively debate among scientists about evaluation methods. Some prefer peer review-based research assessments, while others think that bibliometric citation-based methods should be used as a verifiable mechanism for promotion and distribution of public research funds. Like peer reviews, but for other reasons, citations suffer from several problems. One of them is that they are related to the order in which editors arrange the sequence of papers in each issue of a journal. Research by Smart and Waldfogel (1996), Ayres and Vars (2000), Pinkowitz (2000), and Hudson (2007) finds that leading articles – those at the front of the journal – get more cites than others. This is tested by running regressions of the number of cites on the order in which the paper is placed and on some control variables.

Readers thus seem to believe that the editors of journals are smart enough to pick the ‘best’ paper ready for the coming issue and choose it as a leading paper. They also believe that the paper editors find to be the best is actually the best.

In recent work with co-authors (Coupé et al. 2010), I run an analysis that compares the number of cites conditional on ordering, in two types of publication strategies: random versus selectively ordered ranking of papers. The European Economic Review (EER) provides a natural experiment due to an editorial quirk.

Between 1975 and 1997, the initial of the first author’s surname was used to order papers in some issues; in others it was not so. As long as we are ready to accept that the alphabetical order is random, in the sense that on average it cannot help separate good and bad papers, this can be considered a natural experiment. This allows us to untangle whether leading papers are more cited because they lead or because they are of higher quality.

If in alphabetically ordered issues, leading papers also get more cites than others, then one can wonder whether editors really have a good guess at quality when they use their judgment in ordering. If this were the case, leading papers are more cited because they are leading (and readers expect them to be better) and not because they are of better quality.

To check for consistency, we also compare this with cites to papers in American Economic Review (AER), where, except by chance, the order is never alphabetical.
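The regression approach described in the quote above can be sketched with simulated data. Everything in the snippet below is hypothetical (the variable names, effect sizes, and controls are my own illustration, not the authors' actual specification): cites are regressed on a lead-article dummy plus a control, as in the studies Ginsburgh cites.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data (hypothetical, for illustration only):
lead = rng.integers(0, 2, n)       # 1 if the paper leads its issue
pages = rng.integers(10, 40, n)    # article length, a typical control
cites = 3 + 2.0 * lead + 0.1 * pages + rng.normal(0, 2, n)

# OLS of cites on the lead dummy plus controls, in the spirit of the
# regressions run by Smart and Waldfogel and the other cited studies.
X = np.column_stack([np.ones(n), lead, pages])
beta, *_ = np.linalg.lstsq(X, cites, rcond=None)
print(f"estimated lead-article effect: {beta[1]:.2f} extra cites")
```

With a true effect of 2.0 built into the simulated data, the estimated coefficient on the lead dummy recovers roughly that value; the point is only to show what "regressing cites on order plus controls" means mechanically.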

The empirical results? Ginsburgh writes,

Our results show:

Leading papers get marginally more cites in all three types of journals (EER alphabetically ordered, EER non-alphabetically ordered, and AER).

As expected, the effect for AER is much larger than for EER.

But the difference in the mean number of cites between AER and EER papers is not very large (5 vs 2 cites). Moreover, for EER the difference in the marginal effect on citations of the first paper is not very different for alphabetical and non-alphabetical issues (1.9 v. 2.8), though a likelihood ratio test shows that the difference is statistically significantly different from zero.
This suggests that the lead article, when editors exercise discretion, is of better quality, but citation numbers overstate how much better it is. Based on the estimates, two thirds of the effect (1.9/2.8) is the result of going first, while only one third can be attributed to better quality. Note that while there is no difference between the first and second paper in AER, for EER cites decrease after the first paper.

Long papers are more cited than short ones, and notes are usually less cited (for AER the difference is quite large). The sequence of annual dummies that represent the year of publication, and thus the age of the paper in 2000, picks up coefficients that are declining in the case of AER: recent papers get fewer (cumulated) cites. The coefficients show no particular trend for EER. One possible reason may be that the natural decrease of cites for more recent papers is compensated by more cites due to increasing average quality of EER over time.

The ordering by the initial of the name may not be entirely random, since, in economics, names usually appear in alphabetical order. It is thus possible that lead papers in alphabetically ordered issues are more likely to be co-authored. To the extent that such papers get more cites, either because they are of better quality or because of more self-citations, the lead article effect may simply capture the influence of a larger number of co-authors. This was controlled by including the number of authors as a variable. Its effect is positive and highly significant in the case of discretionary ordering, both in EER and AER, but insignificant in alphabetical issues. More importantly, however, the inclusion of this variable, even when highly significant, did not change the sign and significance of the main variable of interest. This thus suggests that the estimated effect is purely a ‘lead article effect’.
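The two-thirds/one-third split quoted above is just the ratio of the two marginal effects reported in the column. A quick back-of-envelope check, using only the 1.9 and 2.8 figures:

```python
# Decomposing the lead-article effect using the two marginal effects
# reported in the column.
alphabetical_effect = 1.9    # extra cites for the lead paper under random (alphabetical) ordering
discretionary_effect = 2.8   # extra cites when editors choose the lead paper

# Under alphabetical ordering, position carries no quality signal,
# so 1.9 is the pure "going first" effect; the remainder is quality.
position_share = alphabetical_effect / discretionary_effect
quality_share = 1 - position_share

print(f"position: {position_share:.0%}, quality: {quality_share:.0%}")
# roughly two thirds position, one third quality
```

This makes explicit the identifying assumption: the alphabetical issues pin down the pure position effect, and whatever discretionary ordering adds on top of it is attributed to quality.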

Ginsburgh ends by noting,

Our paper was published as a leading paper in issue 61(1) of Oxford Economic Papers. Did the editor want to make a joke? Or did he think it was better than the other papers published in the same issue? If I were you, I wouldn’t believe him.

Personally, I have had the lead article only once, and that paper has never been cited!