This article draws heavily on Dusek, Ortmann & Lizal (2005), which it shortens considerably but also updates.

Introduction

Corruption remains an important policy concern in virtually all countries. Due to its secretive nature, the extent and pervasiveness of corruption are difficult to assess, although credible assessment tools (e.g., the CPI of TI (www.transparency.org; Treisman 2000), or the new V4 City Corruption Propensity Index of Transparency International CR (www.transparency.cz; Ortmann 2004)) provide approximations suggesting that the available hard data (e.g., criminal convictions) are but the tip of the iceberg. For the same reason, it is also difficult to identify what entices people to become corrupt.

Specifically, while the evidence suggests that low economic development, federal structure and short histories of experience with democracy and free trade all favor corruption on the macro-level (Treisman 2000), it is poorly understood what exactly, on the micro-level, the determinants of corruptibility are and what institutional arrangements could be used to fight (the causes of) corruption, if at all.

Among the specific questions in need of an answer are: How important are detection probabilities for bribe-giving and bribe-taking? How important are the threatened penalties? Are detection probabilities correctly perceived? Can the perception of detection probabilities be systematically manipulated (e.g., by going after high-visibility violators rather than routine violators)? Is corruptibility also a function of people’s perception of the pervasiveness of corruption in society? Do efficiency wages, or loyalty premia, reduce the susceptibility of public officials (e.g., police or customs officers) to corruption? How about staff rotation? Are the leniency provisions in Czech law likely to accomplish what they are meant to accomplish? Is it really a good idea to distrust the 85 % of police officers that seem to be honest (even in the Czech Republic) by installing monitoring devices in their cars, and to switch to cash-less transactions, rather than rely on whistleblowers and hot-lines to go after the 15 % of the force that are not? Is it really a good idea to have local detectives pursue small acts of bribery rather than some centralized police force that is not likely to know the alleged perpetrators? What do laws and regulations, in other words, have to look like if they are to stand a chance of effectively undermining “the tenacity of the past” (Treisman 2000, p. 438)?

Because of the secretive nature of corruption, these questions are difficult to answer. What one therefore typically sees is a trial-and-error approach to laws and regulations manifested in frequent legal and regulatory revisions through which authorities try to react to legislative and regulatory deficiencies that have become too obvious to ignore (e.g., see the frequent revisions of public procurement law in the Czech Republic). At best this legislative and regulatory process is informed by intuition and, maybe, examples from other countries.

Laboratory experiments to the rescue?

Laboratory experiments (from here on simply experiments) have been used increasingly, and successfully, as economists’ method of choice to understand a plethora of design and implementation problems: Experiments have, for example, been used to fine-tune auction mechanisms for spectrum auctions (Milgrom 2004; Klemperer 2004; Plott 1997; Plott & Salmon 2004) and matching mechanisms in a variety of labor markets (Roth 2002).

Experiments allow us to control the behavior of subjects in ways that are typically not possible in the field. More importantly, experiments allow us to systematically manipulate the environment, observe the resulting behavior changes, and hence address the issue of causality in ways typically not possible in field contexts. It is simply less expensive to test alternative institutional arrangements (e.g., subtle differences in auction procedures for public procurement projects) in the experimental laboratory than in the laboratory of real life.

For these reasons, and because of the undeniable success of experiments elsewhere, experiments on corruption, corruptibility, and measures to fight them seem, prima facie, an obvious tool. Interestingly, until a few years ago there were no such experiments. In fact, Dusek et al. (2005) reviewed the universe of such experiments, which at the time of the writing of their article amounted to about a dozen. Before we discuss why experiments on corruption and corruptibility are rare, and what the future of such experiments might be, let us illustrate, and contextualize, the extant experiments by way of a few select examples.

A brief review of experiments on corruption and corruptibility

Dusek et al. (2005) categorize experiments on corruption and corruptibility as those involving bilateral settings and those involving unilateral settings. Corruption is, of course, almost by definition a three-player game involving a briber (the principal), a bribee (the agent, typically assumed to be some public official), and a third party (possibly society) that is damaged by the bribe. That third party is, however, typically not an active player. Rather, it is a party negatively affected by the actions of the public official. If, for now, we ignore the welfare-reducing externalities imposed on the third party, we can analyze corruption as a principal-agent game that is problem-isomorphic to “gift-exchange” games, or trust games, widely studied in the literature (e.g., Kreps 1990; Fehr & Schmidt 1999; Bolton & Ockenfels 2000; Charness & Rabin 2003; Cox 2004; for a critical review see Gueth & Ortmann 2006). The essence of such games is the interaction between two players, a principal and an agent, each of whom can take one of two actions. The principal can (not) trust that the agent will do what he promises to do (e.g., engage in the efficiency-enhancing action, which in the context we deal with here would be an illegal action that promises benefits). A possible parameterization for such a principal-agent interaction is shown in Figure 1. The numbers in this payoff table denote utility or monetary units. Each pair of numbers is an ordered pair, with the first stating the payoff of the Row player (here the Agent) and the second stating the payoff of the Column player (the Principal). For example, the action combination {Not, Not} leads to a payoff of zero for both participants, reflecting in the current context that the efficiency-enhancing action combination did not take place. In contrast, the efficiency-enhancing action combination {Do what promised, Trust} leads to a payoff of “1” for each of the two participants.

Figure 1: Payoff table for the principal-agent (trust) game.

The problem with that outcome is that, while it is efficiency-enhancing, it is not incentive-compatible: if the Principal were to trust, the Agent would have an incentive not to do what she promised, as she could clearly make herself better off by not doing so. (The Principal, according to standard game theory, would anticipate this reasoning and therefore not trust to start with; hence the game-theoretically predicted outcome, or “Nash equilibrium,” would be the action combination {Not, Not}.)

According to standard game theory (as canonized in prominent graduate textbooks such as Mas-Colell, Whinston & Green 1995), the likely outcome of such a game – played once – would thus be the undesirable outcome in the lower right corner. Note that game theory therefore predicts that acts of corruption are not likely to happen in situations that are modeled correctly by Figure 1. In essence, the outcome in the upper left corner requires trust (on the part of the briber, or principal) and reciprocity (on the part of the bribee, or agent), both of which are at odds with the assumptions of self-interest and rationality. That said, Figure 1 also suggests that repetition of such a game – supported by reputational concerns on the part of the agent – might very well bring about the desirable (from the point of view of the principal and the agent) outcome in the upper left corner. And indeed, that is by and large the game-theoretic prediction for scenarios where agents interact repeatedly. Game-theoretically, one distinguishes between one-shot and finitely repeated games (which have the same outcome prediction) on the one hand and indefinitely repeated games on the other; in the current context we mean the latter. In essence, a game that is indefinitely repeated allows the players to capture the payoffs in the upper left corner again and again. For all reasonable discount factors, the sum total of these payoffs is larger than the occasional deviation payoff for the agent in the lower left corner.

A word on method: an economics experiment typically takes place as follows. Potential participants are invited to come to a location such as a classroom or a (dedicated) computer lab. At that point they know relatively little about the experiment other than that it is an economics experiment (a very important piece of knowledge; see Hertwig & Ortmann 2001) and how much they can expect to earn. Once they are in the classroom or computer lab, they read (or are read) the instructions, and then – typically mediated through a computer program – make the kind of decisions described in Figure 1 or in the various experiments described here. It is a key feature of an economics experiment that subjects do not answer hypothetical questions but that their decisions matter to them financially. (In most experiments subjects earn, on average, about 2–4 times the minimum wage.)

The game-theoretic predictions for trust games of various forms have been tested experimentally (Fehr, Kirchsteiger & Riedl 1993; Fehr, Gaechter & Kirchsteiger 1997; Engelmann & Strobel 2004; Berg, Dickhaut & McCabe 1995; Ortmann, Fitzgerald & Boeing 2000; Cox 2004; for a critical review see Gueth & Ortmann 2006), and – while there is some dispute about what the experimental data really show (e.g., Rigdon 2000; Dittrich & Ziegelmeyer 2005; List forthcoming; see also Gueth & Ortmann 2006) – there seems to be wide agreement that, in one-shot situations, trust and reciprocity bring about the outcome that is desirable from the perspective of the players more often than economic theory would have it.
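The one-shot and repeated-game logic of the trust game can be made concrete with a small sketch. The payoffs below are hypothetical numbers consistent with the verbal description (reciprocated trust yields 1 per round, deviation yields a larger one-time temptation payoff, and {Not, Not} yields 0); they are illustrative assumptions, not the article's actual parameterization.

```python
from fractions import Fraction

# Hypothetical Agent payoffs (illustrative assumptions, not Figure 1's values):
R = Fraction(1)  # payoff per round from reciprocating trust (upper left corner)
T = Fraction(2)  # one-time temptation payoff from deviating (lower left corner)
P = Fraction(0)  # payoff once the Principal stops trusting ({Not, Not})

# One-shot game: the Agent's best response to Trust is to deviate (T > R),
# so the Principal, anticipating this, does not trust: {Not, Not} results.
assert T > R

def agent_deviates(delta):
    """Grim-trigger logic in the indefinitely repeated game: the Principal
    trusts until the first deviation, then never trusts again."""
    cooperate_forever = R / (1 - delta)          # R + delta*R + delta^2*R + ...
    deviate_once = T + delta * P / (1 - delta)   # grab T once, then P forever
    return deviate_once > cooperate_forever

# Cooperation is sustainable iff delta >= (T - R) / (T - P).
delta_star = (T - R) / (T - P)
print(delta_star)                      # 1/2 under these assumed payoffs
print(agent_deviates(Fraction(1, 4)))  # True: an impatient agent deviates
print(agent_deviates(Fraction(3, 4)))  # False: a patient agent cooperates
```

This is the sense in which "for all reasonable discount factors" cooperation pays: under these assumed numbers, any agent who weights the future at least half as much as the present prefers the stream of upper-left payoffs to the one-time deviation gain.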

Building on the basic paradigm of the trust game, Abbink, Irlenbusch & Renner [AIR] (2000) experimentally tested a “moonlighting game” with legally unenforceable contracts: A principal “hires” a moonlighter (the agent) to perform some task; he also provides the resources. The moonlighter can either steal the resources or perform the task, thus generating an economic surplus (“efficiency gains”), which he can either share with the principal or pocket. In analogy to trust games, efficiency gains of that kind require a (non-binding) agreement to generate an economic surplus, and hence trust (on the part of the principal) and reciprocity (on the part of the agent). The novel feature in the game that AIR tested was an appended stage in which the moonlighter faced (game-theoretically irrational) retribution if he did not reciprocate the trust. Game-theoretically, the appended retribution stage was constructed so as to make no difference: since retribution would be costly and bring about no direct benefit – telling the authorities that you had engaged in illegal activities would come at a net cost – a rational principal would not engage in it. The experimental results falsified this prediction. Summarizing very crudely, hostile actions were consistently punished (retribution!) while friendly ones were less consistently rewarded (little reciprocity!).

In a follow-up article (AIR 2002), the same authors experimentally tested a bribery game. In the baseline or “pure reciprocity” treatment, the briber proposed a deal to the bribee, who could decide whether to accept or reject it.
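The claim that the appended retribution stage makes no game-theoretic difference follows from backward induction, which can be sketched as follows. The payoffs and the punishment technology below are illustrative assumptions, not AIR's actual experimental parameters.

```python
# Backward induction on the appended retribution stage of the moonlighting
# game. All numbers are illustrative assumptions, not the experiment's stakes.
# Outcomes map the agent's choice to (principal_payoff, agent_payoff)
# before any retribution takes place.
outcomes = {
    "steal resources":       (-4, 6),
    "perform, keep surplus": (-2, 8),
    "perform, share":        (4, 4),
}
PUNISH_COST = 1  # assumed: what retribution costs the principal
PUNISH_FINE = 3  # assumed: how much retribution reduces the agent's payoff

def spe():
    """Subgame-perfect prediction for a purely self-interested principal."""
    best_action, best_payoff = None, None
    for action, (p_pay, a_pay) in outcomes.items():
        # Last stage: the principal punishes only if it raises his own payoff;
        # since punishing is costly and yields no benefit, he never does.
        punish = (p_pay - PUNISH_COST) > p_pay  # always False
        a_final = a_pay - PUNISH_FINE if punish else a_pay
        # The agent anticipates this and maximizes her own final payoff.
        if best_payoff is None or a_final > best_payoff:
            best_action, best_payoff = action, a_final
    return best_action

print(spe())  # 'perform, keep surplus': the retribution threat has no bite
```

Because the threat of costly punishment is never carried out in equilibrium, the agent ignores it, which is exactly the prediction the experimental data falsified: real principals punished hostile actions anyway.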
