Jorge Hirsch (2005a, 2005b) recently proposed a new research performance indicator that is designed for application at the micro level. The Hirsch index, or h index, quantifies as a single-number criterion the scientific output of a single researcher. Hirsch’s (2005b) index is an original and simple new measure incorporating both quantity and visibility of publications (van Raan 2006): “A scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤h citations each” (Hirsch 2005a: 16569). An h index of 40 means, for example, that a scientist has published 40 papers that have each been cited at least 40 times. A scientist’s h index will never decrease (Sidiropoulos et al. 2006); an increase is to be expected as new (high-impact) papers are published, as ‘sleeping beauties’ (van Raan 2004b) come to life, and as the scientist’s papers attract further citations (Cronin and Meho 2006; Hirsch 2005a). h = 0 characterizes inactive scientific authors (Glänzel 2006) who have at best published papers that have had no visible impact.
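Given this definition, the h index can be computed directly from a list of per-paper citation counts. The following is a minimal sketch; the function name and the sample data are illustrative, not taken from Hirsch’s paper:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has at least `rank` citations
        else:
            break
    return h

# A scientist whose papers were cited 10, 8, 5, 4 and 3 times has h = 4:
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that an empty publication list, or one in which no paper has been cited, yields h = 0, matching the characterization of inactive authors above.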

The proposed new measure of research performance was quickly taken up by Nature (Ball 2005) and Science (Anon 2005). The idea of ranking scientists by a single number and the alleged advantages that the h index has over other citation-based indices (for example, total number of papers, total number of citations, or citations per paper) attracted the attention of scientific news editors. The h index is seen to have the advantage that it gives a robust estimate of the broad impact of a scientist’s cumulative research contributions (Cronin and Meho 2006; Hirsch 2005a). This means that the h index is insensitive both to a set of lowly cited (or non-cited) papers and to one or several highly cited papers: a scientist with very few highly cited papers (a ‘one-hit wonder’) or, alternatively, many lowly cited papers will have a weak h index (see Table 1). A further advantage seen for the h index is that the data needed for its calculation are easy to access in the Thomson Scientific (Philadelphia, Pennsylvania) Web of Science database without any off-line data processing (Batista et al. 2005). The index can be calculated by sorting a set of papers (co-authored by one scientist) using the ‘times cited’ option: scroll down the Web of Science output until the rank of a paper (in terms of citations) is greater than the number of citations it has received. The preceding rank equals the h index (Kelly and Jennions 2006).

Table 1: Two scientists with the same h index: scientist A with very few highly cited papers and scientist B with many lowly cited papers.

Paper   Citations (Scientist A)   Citations (Scientist B)
1       51                        6
2       34                        5
3       29                        4
4       22                        4
5       3                         3
6       1                         3
7       0                         2
8                                 2
9                                 1
10                                0
11                                0
h       4                         4
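Applied to the citation counts in Table 1, the same ranking rule yields h = 4 for both scientists despite their very different citation totals (140 for scientist A, 30 for scientist B). A brief illustrative check, using only the data from the table:

```python
def h_index(citations):
    # h = the largest rank r (papers sorted by citations, descending)
    # whose paper still has at least r citations
    ranked = sorted(citations, reverse=True)
    return max([r for r, c in enumerate(ranked, 1) if c >= r], default=0)

scientist_a = [51, 34, 29, 22, 3, 1, 0]          # few, highly cited papers
scientist_b = [6, 5, 4, 4, 3, 3, 2, 2, 1, 0, 0]  # many, lowly cited papers

print(h_index(scientist_a), sum(scientist_a))  # → 4 140
print(h_index(scientist_b), sum(scientist_b))  # → 4 30
```

This illustrates the robustness claim above: neither scientist A’s four heavily cited papers nor scientist B’s long tail of lowly cited papers moves the index beyond 4.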

Because of the advantages that the h index offers as an evaluative bibliometric measure, the h index has found widespread positive reception. As an alternative to other citation-based indices that could be used to measure research performance, however, some critical objections to the new index have been raised. Van Raan (2006), for instance, states, “it is not wise to force the assessment of researchers or of research groups into just one measure, because it reinforces the opinion … that scientific performance can be expressed simply by one note” (p. 501). Several indicators are necessary in order to illuminate different aspects of performance (van Raan 2006) and to provide a more adequate and multifaceted picture of reality (Glänzel 2006). According to Sidiropoulos et al. (2006) the h index “has various shortcomings, mainly of its inability to differentiate between active and inactive (or retired) scientists and its weakness to differentiate between significant works in the past (but not any more) and the works which are ‘trendy’ or the works which continue to shape the scientific thinking”. Since h values (that is, published papers and the citations papers receive) increase over time (Egghe 2006; Hirsch 2005a), it is apparent that a scientist’s h index depends on the scientist’s scientific age (that is, years publishing, Glänzel 2006; Roediger III 2006). Therefore, in ranking scientists, the h index always puts newcomers at a disadvantage and gives older, well-established scientists an advantage (Cronin and Meho 2006; Glänzel 2006).
It should also be considered that when using the h index for comparison purposes, there are discipline-dependent citation patterns in science (Bornmann and Daniel, accepted for publication-a; Hirsch 2005a) that are determined by the average number of citations in a paper in a given research field, the average number of papers produced by each scientist in the field, the size of the field (number of scientists), and the attractiveness of the research area (mainstream or non-mainstream area). Because of these discipline-dependent citation conventions, higher h indices can be expected in some areas of research than in others (Iglesias and Pecharromán 2006).

The findings by Hirsch (2005a), Cronin and Meho (2006), Bornmann and Daniel (2005), van Raan (2006), and Kelly and Jennions (2006) on the convergent validity of the h index in different research fields indicate that the h index is a valid indicator for research performance at the micro and meso level. However, Lehmann et al. (2005) found that the h index lacks the necessary accuracy and precision to be useful. As there has as yet been no thorough validation of the h index – that is, cross-discipline validation on the basis of broad statistical data – for various areas of application, the h index should, given the current state of research, not (yet) be used as a criterion to inform decision making in science (Bornmann and Daniel, accepted for publication-b). Only when these studies have been conducted and have confirmed the validity of the h index should the new measure be implemented. Moreover, as the h index has some disadvantages – just as do many other evaluative bibliometric indicators – it should, for evaluative purposes, always be applied as an addition to, and not as a substitute (Egghe and Rousseau 2006; Glänzel 2006) for, other indicators that have become established standards in recent years (van Raan 2004a).

Glänzel, Wolfgang, 2006: On the Opportunities and Limitations of the H-Index. Science Focus, 1(1): 10-11.

Hirsch, Jorge E., 2005a: An Index to Quantify an Individual's Scientific Research Output. Proceedings of the National Academy of Sciences of the United States of America, 102(46): 16569-16572. [Retrieved 31.10.2006]