Tuesday, August 6, 2013

African Dictator Meets the White Man

As a development economist, I read (or am occasionally forced to read) about experiments, especially the field and quasi-field experiments that are carried out in developing countries to test this, that or the other hypothesis. The idea is to randomize the subjects into two groups. In one group you do something to them: give them cash transfers conditional on health checkups, or mosquito nets, or subject them to various ordeals in order to claim a poverty-line handout, or provide loans, or information about jobs. This is the treatment group. In the other group you typically do nothing. This is the control group. The idea is that the treatment and control groups are statistically similar to start with, so that if there is a differential change in behavior following the intervention, you must perforce chalk it up to the intervention.
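Stripped to essentials, the logic can be sketched in a few lines of Python. This is a toy simulation, not any particular study's design: the effect size, the noise, and the coin-flip assignment rule are all made up for illustration.

```python
import random
import statistics

random.seed(42)

def run_trial(n=1000, true_effect=0.5):
    """Simulate a randomized trial: assign each subject to treatment
    or control by coin flip, then compare mean outcomes."""
    treatment, control = [], []
    for _ in range(n):
        baseline = random.gauss(0, 1)      # pre-existing variation in outcomes
        if random.random() < 0.5:          # random assignment
            treatment.append(baseline + true_effect)
        else:
            control.append(baseline)
    # Randomization makes the two groups statistically similar on average,
    # so the difference in means estimates the effect of the intervention.
    return statistics.mean(treatment) - statistics.mean(control)

print(run_trial())  # an estimate of true_effect, plus sampling noise
```

The point of the coin flip is that `baseline` --- everything the experimenter cannot see --- washes out in expectation, which is exactly the "statistically similar to start with" claim above.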

The idea of treatment versus control groups is, of course, an old one and it's been around in medical research since I don't know when. But in development economics it's all the rage now. That's not a big deal: a lot of science is just a sequence of reinvented wheels.

Just as in medicine, though, there are obvious limits to the efficacy of this procedure. If you work in the area, you'll know about most of them, but seeing as this is a blog for all sorts of readers, here are a few common gripes:

1. You can't do some interventions. Just as you can't create a treatment group of smokers to test the "efficacy" of cigarettes in knocking you dead, you can't study, say, ethnic violence by requesting a treatment group to engage in conflict, while equally politely asking the control group to desist. So there are some experiments you can't run, and this can distort the questions you ask.

2. Interventions usually require cooperation: of this educational NGO or that microfinance group. But there is some serious self-selection here: would you expect a badly functioning NGO to accede to evaluation by a randomized trial? I don't think so.

3. You have no clue whether the results you get will apply --- not qualitatively and certainly not in quantitative magnitude --- to the next group you study. In econometric jargon, this is called the problem of external validity: the applicability of your results to the world at large. If there is a frontier that trades off "internal validity" for "external validity," these randomized experiments would appear to occupy a corner of that frontier.

4. The limited range of questions that can be asked means that the questions are usually boring, with answers that most of us can guess beforehand. What we can't guess is how quantitatively strong the answer is, so that's useful, I suppose, subject to the external validity concerns.

But hey, it takes all sorts and I am all for randomized experiments, just as long as I don't have to do them. I'm a consumer of this stuff.

Points 1--4 above are pretty well known, but I wanted to tell you about something else (Point 5!) that recently caught my fancy. Here's a funny story that bears on experiments in developing countries.

The dictator game, for the uninitiated, is simplicity itself: person A is handed a sum of money and unilaterally decides how much of it to pass along to person B, who must accept whatever comes her way. (What role does the partner play? None, so in a vacuum the dictator game looks silly, but it is actually a variant of the equally famous ultimatum game, which looks just like the dictator game except that person B is allowed to accept or reject the proposed allocation, with rejection resulting in zero payoff to both parties. Taken together, the two games help disentangle just how much the allocation is affected by the generosity of the giver, or the anticipated reactions of the receiver.)
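The contrast between the two games is easy to pin down in code. A toy sketch of the payoff rules, with made-up stakes:

```python
def dictator_game(endowment, offer):
    """Person A splits the endowment; B has no say at all."""
    assert 0 <= offer <= endowment
    return endowment - offer, offer  # (A's payoff, B's payoff)

def ultimatum_game(endowment, offer, b_accepts):
    """Same split, but B can reject, leaving both with nothing."""
    assert 0 <= offer <= endowment
    if b_accepts:
        return endowment - offer, offer
    return 0, 0

# A offers 3 out of 10 in each game:
print(dictator_game(10, 3))          # (7, 3): B's reaction is irrelevant
print(ultimatum_game(10, 3, True))   # (7, 3)
print(ultimatum_game(10, 3, False))  # (0, 0): rejection punishes both
```

Any giving in the dictator game must come from the giver's own preferences, since `b_accepts` never enters the picture; giving in the ultimatum game mixes generosity with fear of rejection.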

Dictator games have been organized by experimental economists all over the world: Africa is a particularly fertile playground, with a veritable army of experimentalists deploying these and other games on the "natives" in the "field" (the last word being a special favorite of these academics). The diagram to the left is taken from the paper, and shows the happy proliferation of dictator games in Africa.

What the "natives" make of it God only knows, though this paper gives us a clue. The authors also run dictator games, a whole lot of them --- all in Sierra Leone --- but the games themselves are only part of the experiment in their paper. The real experiment is that a sub-group of games --- the treatment group --- is given the White Man Treatment: a white "supervisor" oversees the game, a white American in fact. In the control group, the supervisor is a Sierra Leonean from the capital city, Freetown. In other respects the supervisors are deliberately chosen to be similar: they are all male and college-educated, and (according to the authors) have "friendly demeanors". ("Hi! Welcome! You get to be a dictator!")

Here's the amazing thing: when the foreigner oversees the game, average dictator generosity rises by 19%.

What's going on here? Is the white guy a sign that some serious foreign aid might be on the way, and that these games represent a preliminary evaluation of village needs? That's possible, but which way would that effect run? Might the person playing dictator signal his own neediness by giving away less, or might he signal his compatriot's neediness by giving him more? Unclear, but what is clear is that in the villages with greater exposure to past aid, the white-man effect is actually weaker.

So far, then, this is consistent with two lines of explanation. One, that signaling of need occurs via generosity to compatriots, but villages with greater aid exposure know that these organized games have nothing to do with foreign aid (and only to do with publishing another paper), and so tone down the signaling. Or two, that signaling of need occurs via stinginess, which is why the villages with high exposure give only marginally more in the presence of a white supervisor. The second explanation would require the exposed villages to believe more strongly that the white-man-supervised games are connected to aid. And indeed, this is what the authors find in a separate questionnaire that they administer to the participants: "players from villages with greater aid exposure are also more inclined to believe that the behavioral games were conducted to test their community's suitability for aid disbursement."

That shuts the lid on one sort of Pandora's box, but unlocks another: if the exposed villages signal need by toning down their generosity (thus attenuating the white-man effect), why was that white-man effect there in the first place? This isn't so easy to answer. A white man inspiring the natives to greater flights of generosity? I can't imagine why, to be honest, though there is evidence that individuals with greater customary standing in the village aren't afflicted by the white-man effect to the same degree. Is it just a foreigner effect: the desire to show the outsider that we are a generous people? My fellow Indians and fellow Americans are constantly thumping their own chests and engaging in such pronouncements (We Are a Generous People, Until Moved to Anger, etc.); why shouldn't a Sierra Leonean?

It may also be the case that the subjects are eager to please the outsider. They might suspect that the outsider wants to disprove the narrow, self-centered proclivities of homo economicus with a dazzling display of behavioral economics in all its magnanimous glory. The possibility that subjects in other societies might simply be taking their observers for a joyride by telling them what they want to know --- and more --- has been raised in other contexts; for instance, see here and here for an account of the Margaret Mead controversy.

And is this effect only present for white foreigners, or for foreigners of any stripe? Might Oeindrila herself, a brown Bengali-American, elicit the same displays of generosity from the star-struck Sierra Leoneans were she to supervise their dictator games? I must remind myself to ask her this question.

Don't forget (or if you never knew it, read about) the Hawthorne effect: the very fact that subjects are being observed might change their behavior. If like me you took a quantum mechanics course (readers of my blog will know I did), you'd see this as an analogue of the Heisenberg uncertainty principle. Except that such analogies are the usual crappy parallels that popular-science writers like to invest with deep meaning, so I will pass.

But this is a pretty cool paper.