Scientific research is a type of creative pursuit. By definition, journal articles are supposed to report what is new or novel. Once you buy that, the low citation rates in science make sense. First, creativity (or importance) is a scarce commodity. Anyone trained in a psychology graduate program can run an experiment, but few can run a novel one. Second, new results are themselves scarce: fields quickly get covered, and only obscure points remain. Third, even a creative scientist who has found a genuinely important problem might not have an audience. Perhaps people are focused on other issues, or the scientist is low status or publishing in a low-status journal.

In principle, then, we should expect that few articles will deserve more than token citation. But still, why can’t journals just stick to the important stuff? The answer is imperfect knowledge. Once in a while we encounter obvious innovation, but usually we have a limited ability to predict what will turn out to be important. It is better to overpublish and let history be the judge. Considering that the cost of journal publishing is low (though not the subscriptions!), we should be OK with a world of many uncited and lonely articles.

Howard says nobody knows … but that’s also why a small subset of researchers collect extreme citation counts: readers rely on high-status signals as a substitute for judgment and effort.

It would be interesting to see what would happen if academia abolished names and affiliations on papers, so that every paper carried only an anonymous number. Would the same people get cited? Or would the distribution of citations change?
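One way to make the thought experiment concrete is a toy simulation. The sketch below allocates citations by simple preferential attachment (papers that already have citations are more likely to get more), with and without a visible status bonus for a small subset of papers. Everything here is an assumption for illustration: the 5% "high-status" cutoff, the bonus weight, and the attachment rule are arbitrary choices, not a model of real bibliometrics.

```python
import random


def simulate(n_papers=500, n_citations=5000, status_weight=0.0, seed=0):
    """Allocate citations among papers. Each citation goes to a paper with
    probability proportional to (1 + its current citations + status bonus).
    A positive status_weight gives the first 5% of papers a visible
    head start, standing in for famous names and affiliations."""
    rng = random.Random(seed)
    # Hypothetical assumption: the top 5% of papers carry a status signal.
    status = [status_weight if i < n_papers // 20 else 0.0
              for i in range(n_papers)]
    counts = [0] * n_papers
    for _ in range(n_citations):
        weights = [1 + counts[i] + status[i] for i in range(n_papers)]
        chosen = rng.choices(range(n_papers), weights=weights)[0]
        counts[chosen] += 1
    return counts


def top_share(counts, frac=0.05):
    """Fraction of all citations captured by the top `frac` of papers."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)


anonymous = simulate(status_weight=0.0)    # names and affiliations hidden
signalled = simulate(status_weight=50.0)   # status signal visible
print("top-5% share, anonymous:", round(top_share(anonymous), 2))
print("top-5% share, signalled:", round(top_share(signalled), 2))
```

Even the anonymous run concentrates citations (rich-get-richer alone produces a heavy tail), but the status bonus concentrates them further onto the pre-anointed subset. That is the distinction the anonymization experiment would probe: how much of the skew comes from cumulative advantage versus visible status.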