Sunday, 24 July 2016

There is hard evidence that academics and journals are manipulating citations to (unfairly) improve their reputation. There is a summary of this and a call for more evidence in the post "What do we know about journal citation cartels? A call for information" at https://www.cwts.nl/blog?article=n-q2w2b4

However, these deliberately fraudulent cases are simply the extreme end of a spectrum of practices that need examination, understanding and (of course) simulation modelling. It is well known that many referees demand that papers cite certain works (e.g. their own) as a condition of publication. It is also well known that in some fields there is a strong social norm that one should spend the first 2-5 slides of any presentation citing previous work (whether relevant or not). This demand that outsiders learn the 'key' references before being allowed to present their research has advantages: it avoids repeating past debates and research, and it aids the coherence of the field. However, it is also an effective means of excluding outsiders and ensuring that insiders are well cited.

There has now been a stream of simulations that touch on the processes of peer review, but a wider set of simulations and analyses of science is needed, focusing on the tendency of fields to become inward-looking, to the extent that some look more like cartels than academic discussions.

Abstract

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

I am at a nice interdisciplinary workshop in Berlin on "Coherency-Based Approaches to Decision Making, Cognition and Communication".

The basic idea comes from Thagard (1989): human 'reasoning' happens in a way that ensures coherency between beliefs, rather than following the classical logicist picture of reasoning from evidence to conclusions. For example, as well as forward inference, backward inference from conclusions to the evaluation of evidence is common. This theory now has a reasonable amount of evidence to support it and has been extended to include emotions and goals (Thagard 2006).
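Thagard's account is usually implemented as constraint satisfaction over a network of propositions. The following is a minimal, purely illustrative sketch, not Thagard's ECHO program itself; the link weights, decay and update rate are invented for the example. Coherent propositions are linked positively, incoherent ones negatively, evidence units are clamped, and activations are updated until the network settles:

```python
def settle(units, links, evidence, steps=200, decay=0.05, rate=0.1):
    """Settle a coherence network by iterative activation updates.

    units:    proposition names
    links:    {(src, dst): weight}, positive = cohere, negative = incohere
    evidence: units clamped at activation 1.0 (observed data)
    """
    act = {u: 0.01 for u in units}
    for _ in range(steps):
        new = {}
        for u in units:
            if u in evidence:
                new[u] = 1.0
                continue
            net = sum(w * act[dst]
                      for (src, dst), w in links.items() if src == u)
            a = act[u] * (1 - decay)
            # push towards +1 on net support, towards -1 on net opposition
            a += rate * net * ((1 - a) if net > 0 else (a + 1))
            new[u] = max(-1.0, min(1.0, a))
        act = new
    return act

# Two rival hypotheses: H1 coheres with the evidence E, H2 contradicts
# it, and the two hypotheses inhibit each other (symmetric links).
links = {("H1", "E"): 0.4, ("E", "H1"): 0.4,
         ("H2", "E"): -0.4, ("E", "H2"): -0.4,
         ("H1", "H2"): -0.6, ("H2", "H1"): -0.6}
acts = settle(["E", "H1", "H2"], links, evidence={"E"})
# after settling, H1 ends positive (accepted), H2 negative (rejected)
```

The same mechanism gives backward inference: clamping a conclusion unit instead of the evidence would propagate support back onto the evidence units.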

As well as Thagard, there was a nice talk by Dan Simon of the University of Southern California, explaining many identified 'biases' in human reasoning using the coherency framework. If you want details, see his paper on SSRN at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=439984. One of the biases he talked about was confirmation bias, for which he cited a nice paper on peer review that I did not know about.

In this study the author sent papers to a selection of reviewers for whom it was known whether they agreed or disagreed with the conclusion of the paper (on a controversial issue). The result was that the reviewers' judgements of the quality of the paper were substantially affected by whether they agreed with its conclusion.

The coherency model of thought seems to be a good basis for modelling the judgement of reviewers within simulations of the peer review system.

This Special Issue solicits social scientific articles examining not
only traditional forms of misconduct, but also modalities of misconduct
that are meant to “game” the modern metrics-based regimes of academic
evaluation. If traditional misconduct – fabrication, falsification, and
plagiarism – concerned fraudulent ways to produce scholarly
publications, much of the new misconduct targets the publication system
itself, for example by producing fake peer reviews, forming citation rings among authors and journals, or publishing articles in dubious journals.

This is a blog (now) associated with the European Social Simulation Association SIG on "Simulating the Social Processes of Science". For all queries about the SIG, or items to post here, please contact Bruce Edmonds "bruce at edmonds dot name". Thanks