Earlier today Transparency International, arguably the world’s most well-known anti-corruption NGO, published the 2016 Corruption Perceptions Index (CPI). Despite taking plenty of criticism over the years, the CPI has become an integral part of the global anti-corruption discussion. Proponents and critics alike are rarely slow in coming forward to unpack its findings. SCSC Director Dan Hough analyses why the CPI manages to generate such controversy.

Some things are almost guaranteed to prompt a reaction. Transparency International’s Corruption Perceptions Index is one of those things. Very few people sit on the fence and claim to be nonplussed by what has become the best-known attempt to gauge how much corruption may plausibly exist. Some commentators use it as an indicator of how much needs to be done; others regard it as an (often forlorn) exercise in quantifying the unquantifiable. Very rarely is anyone neutral about it.

Before launching into any critique of the CPI, it is worth remembering what exactly the index is and where it has come from. The CPI is a composite index: a variety of data sources are used to create what is in effect a poll of polls on perceptions of corruption around the world. Data is gathered from surveys of business people and country experts with the aim of measuring perceived levels of public sector corruption. TI provides a detailed account of where its data comes from and how it uses it, accessible via TI’s own website.
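The “poll of polls” idea can be sketched very simply. The code below is an illustrative simplification, not TI’s actual formula (TI’s real methodology standardises each source against a baseline year before aggregating): each hypothetical source survey, reported on its own native scale, is rescaled onto the common 0–100 range and then averaged, with a country only receiving a score if a minimum number of sources cover it.

```python
def rescale(score, lo, hi):
    """Map a raw source score from its native [lo, hi] range onto 0-100."""
    return 100 * (score - lo) / (hi - lo)

def composite_score(sources, min_sources=3):
    """Average the rescaled source scores for one country.

    TI only scores a country once a minimum number of sources cover it;
    'sources' is a list of (score, scale_low, scale_high) tuples.
    """
    if len(sources) < min_sources:
        return None  # not enough data to score this country
    rescaled = [rescale(s, lo, hi) for s, lo, hi in sources]
    return round(sum(rescaled) / len(rescaled))

# Three hypothetical source surveys, each on its own native scale:
sources = [(4.5, 0, 10), (45, 0, 100), (3.2, 1, 7)]
print(composite_score(sources))  # a single 0-100 composite score
```

The point of the sketch is simply that one headline number is a blend of heterogeneous surveys, which is precisely why the aggregation choices attract so much methodological scrutiny.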

The Results

The CPI was first published in 1995 when it included 41 countries, with New Zealand achieving the best score (i.e. nearest to 10) and Indonesia the worst (nearest to 0). Over the years TI has changed the way it presents and indeed produces its results, with the range of scores now stretching from 100 (no corruption) to 0 (complete corruption).

This year’s CPI included 176 countries (although in 2011 it included as many as 183), with Denmark (90) and New Zealand (90) at the top of the pile and Somalia (10) at the bottom (for the tenth year running). Predictably, the best performing countries share a significant number of characteristics. They are generally open, liberal democracies with a free press. They embrace the notion of transparency, helping citizens see where their hard-earned tax money gets spent. They have independent judiciaries, and all lend support to the long-held assumption that increased accountability leads to lower levels of corruption.

The Scandinavians always do very well, as do countries in western Europe more generally. There are, however, always interesting outliers that stand apart from their regional peers; Singapore (7th in 2016) regularly appears in the top 10, whilst Botswana (joint 35th) leaves many of its African counterparts in its wake. The countries at the bottom, meanwhile, also have lots in common; leaving North Korea (12) to one side, they are war-torn and bordering on the ungovernable. The fact that South Sudan (11), Syria (13), Yemen (14), Sudan (14) and Libya (15) sit immediately above Somalia (10) is evidence of that.

Criticisms of the CPI

The CPI’s prominence has certainly not shielded it from criticism. Indeed, criticising the methodology that underpins the CPI has become a veritable cottage industry (see here, here and here). These criticisms cover a number of issues. To start with, boiling down a country’s corruption troubles to one score is, to put it mildly, methodologically problematic. The type, scope and extent of corruption evident in, say, the city administration of Chicago is likely to be altogether different to that which you’ll find in, for example, rural Arizona. Computing one score to accurately cover such variety is always going to be very difficult. Plus, if a state were ever to register a score of 100, what precisely would that mean? What does a country that apparently has no corruption at all actually look like? Or, conversely, what would a country that scored 0 (totally corrupt) look like? Any discussion of utopias usually ends in disagreement, and it is very likely that that would be the case here.

Furthermore, measuring concepts such as democracy, justice, fairness and indeed corruption is hard at the best of times, and those who do it well acknowledge that their attempts are always approximations. Indeed, statisticians have developed their own language to discuss the problems in getting these measurements right. Even though the CPI’s methodology has undoubtedly become more rigorous over the years, little of this uncertainty is overtly acknowledged in the headline scores. This has led some researchers to cast doubt on whether the data can be put to any real use at all.
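The statisticians’ language referred to here is essentially that of measurement error. A brief sketch, using entirely hypothetical figures, shows why this matters for year-on-year comparisons: because each country’s score is a mean of a handful of source surveys, it carries a standard error, and a small movement in the score can sit comfortably inside that error margin.

```python
import statistics

def standard_error(scores):
    """Standard error of the mean across a country's source surveys."""
    return statistics.stdev(scores) / (len(scores) ** 0.5)

# Hypothetical 0-100 source scores for one country in two years:
this_year = [41, 45, 38, 44]
last_year = [40, 43, 39, 42]

se = standard_error(this_year)
change = statistics.mean(this_year) - statistics.mean(last_year)

# A change much smaller than the standard error is hard to
# distinguish from noise in the underlying surveys.
print(f"change={change:.1f}, standard error={se:.2f}")
```

Here the apparent one-point “improvement” is smaller than the standard error of the score itself, which is exactly the kind of caveat critics argue should accompany the published rankings.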

One particular challenge concerns the problem of defining corruption. It is not always clear what respondents actually understand the term corruption to mean. The terms bribery and corruption often appear to be used interchangeably, and responses to the various surveys are very likely to be shaped, whether directly or indirectly, by the assumptions and attitudes of the western business community. That is the case for the simple reason that the majority of people asked have roots in this particular milieu.

The problem of perception

The CPI also measures perceptions of corruption rather than corruption itself. TI regularly and consistently acknowledges that this can be problematic. While knowing more about how citizens perceive a phenomenon certainly has its uses, it is also plausible that perception and reality might differ considerably. As Ritva Reinikka and Jakob Svensson succinctly note “perception indices raise concerns about biases”. These potential biases may well mean that the CPI is actually (and inadvertently) distorting reality, and simply reinforcing stereotypes and clichés.

A further limitation is that the CPI focuses on perceptions of public sector corruption. In other words, the corruption that takes place in and around governments and public servants. It says nothing about corruption in private business. The rigging of Libor (the rate of interest that banks charge each other when lending money between themselves) in Britain, for example, or the VW emissions controversies in the United States (and indeed elsewhere) involve private actors, but they have very real public impacts, whether on the interest rates that people pay on their mortgages or on public health.

Babies and Bathwater

These problems have prompted a significant number of analysts to be quite scathing about the CPI. Steve Sampson, speaking for many in the development studies community, is sceptical of what he regards as “corruption becoming a scientific concept”. Even fellow quantifiers such as Stephen Knack have criticised some of the statistical techniques that TI has employed in the past. Indeed, Anwar Shah and Theresa Thompson leave no one in any doubt as to how grave they think the CPI’s methodological shortcomings are when they state that “closer scrutiny of the methodology … raises serious doubts about the usefulness of aggregated measures of corruption” and “potential bias introduced by measurement errors lead to the conclusion that these measures are unlikely to be reliable, especially when employed in econometric analyses”. Knack’s careful dissection of the CPI raises further significant issues about the independence – in a statistical sense – of the data used, claiming that many of the ‘statistically significant’ changes that TI claims to have uncovered would not in reality be so if “appropriate corrections for interdependence” had been made.

Facing down the criticisms

For its part, TI has certainly tried its level best both to be open about the methodological shortcomings of the CPI (as well as its other corruption indices) and to adjust them wherever possible. The founder of the CPI, Johann Graf Lambsdorff, for example, acknowledges some of the methodological issues inherent in all composite indicators, and he is always careful to describe changes in country scores from year to year as changes in perceived corruption rather than in actual corruption levels.

TI has also tacitly admitted that the CPI has its limitations by the very fact that it has developed a whole host of other indices – such as the Bribe Payers Index and the Global Corruption Barometer – to look at both the perceptions and experiences of specific groups of stakeholders (ranging from businessmen to households).

And yet, all these criticisms notwithstanding, the CPI has done one indisputable thing: it has put the issues of corruption and anti-corruption well and truly on the policy map. As Andersson and Heywood observe:

“We should not underplay its significance in the fight against corruption: its value goes beyond the stimulation of research activity, since the publication of the CPI each autumn has generated widespread media interest across the world and contributed to galvanising international anti-corruption initiatives, such as those sponsored by the World Bank and the OECD”.

Even staunch critics of the quantification of corruption have begrudgingly admitted that “whatever its limitations” the development of the CPI has “undoubtedly done much to promote the anti-corruption agenda”. It is also doubtful that any of the more nuanced indices that both TI itself and other organisations have developed would have seen the light of day if the CPI hadn’t existed before them.

The CPI certainly doesn’t represent the gospel in terms of global levels of corruption. There are plenty of problems with its methodology and consequently with its findings. But anyone who takes the detailed numbers produced in the CPI too seriously is missing the point. The CPI, for all its sins, has a constructive role to play in helping us think just a little more about how we can better measure corruption and how the battle against corruption can subsequently be taken forward.