In his post this week on ethical validity in research[1], Martin Ravallion writes: “Scaled-up programs almost never use randomized assignment so the RCT has a different assignment mechanism, and this may be contested ethically even when the full program is fine.”

Lotteries aren’t so exotic
But there are plenty of examples where random assignment is used in programs, not at the behest of researchers, but because it is seen as a fair way to allocate spaces in a program. This particularly struck me given that this year alone I’ve entered lotteries to get into the White House Easter Egg Roll, the New York Marathon, the Marine Corps Marathon, and the Cherry Blossom 10 miler (so far I’m 2 for 3). None of these are randomized because a researcher wanted them to be (at least as far as I’m aware), but because a lottery is seen as a fair way of allocating scarce spaces.

Ok, you say, but rolling Easter eggs and running through large cities aren’t real policies that people care about. Well, I’ve also been through lotteries to get my kids into their language immersion program at school, and years ago tried (unsuccessfully) the US Diversity Visa (Green Card Lottery). So I’ve participated in higher-stakes lotteries too.

Indeed, lotteries in education seem increasingly common in the U.S., particularly for admission to charter schools. Equity and transparency are key reasons given for this: the DC Public Charter schools run lotteries for admission, with their website[2] saying “To ensure that children in the District of Columbia receive fair and equitable opportunities to enroll in and attend public charter schools, the District Of Columbia Public Charter School Board (PCSB) has created enrollment and lottery guidelines.” Moreover, all schools receiving Federal Charter School Program funds must use a lottery if there are more applicants than places. Other examples of lotteries in the U.S. include the (famous) Vietnam military draft, the expansion of Medicaid in Oregon, the assignment of dorms or roommates at many colleges, and some national park programs (including a road lottery[3] and an alligator lottery[4]!) – as well as running races, Olympic tickets, Super Bowl tickets, etc., all being allocated at least in part by lottery. So the idea that people find lotteries strange and unethical doesn’t accord with their use in a whole range of different scenarios.
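For concreteness, the oversubscription lotteries described above are mechanically very simple. A minimal sketch of one is below; the function name and parameters are mine for illustration, not any school district’s actual system. Fixing the random seed makes the draw reproducible, which is part of what makes this kind of allocation transparent and auditable, and an ordered waitlist falls out of the same draw for free.

```python
import random

def run_lottery(applicants, num_seats, seed=None):
    """Randomly allocate seats in an oversubscribed program.

    Every applicant gets an equal chance of a seat; a fixed seed
    makes the draw reproducible, so it can be audited afterwards.
    Returns (winners, waitlist), with the waitlist in draw order.
    """
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)               # one shuffle orders the whole pool
    return pool[:num_seats], pool[num_seats:]

# 100 applicants for 30 seats; the same seed always gives the same draw.
winners, waitlist = run_lottery(range(100), num_seats=30, seed=2024)
```

The key design point is that the waitlist order is determined in the same public draw as the winners, so later offers (when seats open up) inherit the fairness of the original lottery.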

But can’t we do better than lotteries?
Martin argues that random assignment may not be the most ethical way to allocate scarce resources when there is excess demand, suggesting that participants instead be chosen in terms of who is likely to benefit most. This is certainly possible in some cases, but I have two concerns with it.

First, I am very skeptical that we know well who is going to benefit most from most programs. The problem is that different theories can often give opposing predictions, and it is only through prior evaluations that we may start to form some sense of who programs might work best for. Consider his example of a training program. I’ve evaluated a number of these programs, and it is never clear whether we should expect training to benefit most those with initially higher levels of skills (on the idea that training complements existing skills) or those with initially lower levels (on the idea that it substitutes for them). Similarly, it is often easy to come up with theories for why a program may work better for men than women or vice versa, for the poor than the rich or vice versa, etc. As I’ve discussed elsewhere[5], identifying levels is very different from identifying who will benefit most from a treatment – communities might be quite good at identifying who is poor, but I suspect they are much less good at identifying who will gain most from receiving some program (it may not be the poorest).

This is where I think some people perhaps read more into the Alatas et al. (2012) paper[6] that Berk referred to[7] than I do. My understanding is that they show that getting communities to pick who is poor does almost as well as a proxy means test in terms of approximating per capita expenditure, and the process seems to make people happier than a proxy means test. But while this tells us communities may have information that helps pick who is poor, it tells us nothing about whether communities are good at picking who will benefit most from a program. Even if the program is as simple as giving people cash, it need not be the case that choosing the poorest, most marginalized individuals results in the greatest lasting impact – as seen, for example, in my work[8] with Suresh de Mel and Chris Woodruff giving cash grants in Sri Lanka.

Second, I am skeptical that the alternative to random assignment in many of the places we work is technocratic assignment based on who will gain most – in many cases the alternative is selection based on who you know, or first-come first-served (which can benefit those with the lowest transaction costs or the largest information networks). Martin asks us to imagine a government telling its underserved citizens that the reason they didn't get roads was because of randomization; this strikes me as better than telling them the reason was that another district's congressman sat on the appropriations committee and made sure all the new roads being built ended up in his or her district.

I think this is certainly an area where more research is of interest – I’d like to see more work comparing treatment effects under different assignment rules. I’m sure in some cases one can do better than (conditional) random assignment, but I suspect such cases are relatively rare. Just as cash transfers have been argued to be a useful benchmark against which to compare aid programs, (conditional) random assignment seems like a useful benchmark allocation mechanism, with the onus then on those claiming they can do better to show this is the case.