By CASEY B. MULLIGAN

March 5, 2014

I earn more and have more schooling than my daughter, who has no schooling and no earnings. But no one would conclude from that alone that my schooling is the reason why I earn more than she does.

It’s challenging to measure causal effects when many other variables are at work. Yet randomized field experiments are in vogue among academics in social science and medicine, even though randomization is often neither the most economical nor the most ethical way to learn how the world works.

An active market in experimental study participation would both reveal some of these weaknesses and improve the quality of the research.

Answers to causal questions are a way to a better life. Does more schooling increase earnings? Does obesity shorten life spans? Does Vitamin C help prevent colds?

A field study, as opposed to a laboratory study, looks at people in their natural setting doing things that they normally do and care about. A randomized field experiment takes the people in the natural setting and randomly determines – with a coin flip, die roll or computer-generated random number – which study participants receive a treatment and which do not.

Applied to the schooling question, this approach would randomly assign study participants to treatment and control groups. The treatment group would be sent to extra schooling; the control group would be prohibited from attending extra school. After the treatment group completes its schooling, its earnings could be measured and compared with the earnings of the control group; the measured difference might then be interpreted as the causal effect of schooling on earnings.
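The mechanics of that comparison can be sketched in a few lines of Python. Everything here is invented for illustration – the participant pool, the earnings figures and the assumed $5,000 schooling effect – but it shows how chance, not the analyst, decides who is treated.

```python
import random
from statistics import mean

# Hypothetical sketch of a randomized field experiment; all numbers
# are invented, including the assumed $5,000 schooling effect.
random.seed(42)

participants = list(range(100))
random.shuffle(participants)                 # the "coin flip"
treatment = set(participants[:50])           # first half gets extra schooling

def observed_earnings(person, treated):
    # Stand-in for real outcome data: a baseline that varies across
    # people, plus a hypothetical effect of the extra schooling.
    baseline = 30_000 + (person % 10) * 1_000
    return baseline + (5_000 if treated else 0)

treated_mean = mean(observed_earnings(p, True) for p in treatment)
control_mean = mean(observed_earnings(p, False)
                    for p in participants if p not in treatment)

# Because assignment was random, the difference in means estimates the
# causal effect of schooling on earnings (up to sampling noise).
print(round(treated_mean - control_mean))
```

In a real study the earnings would be observed rather than simulated; the point of randomization is that, on average, the two groups differ only in the treatment itself.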

These days many academics view the random assignment approach as the gold standard for measuring causal effects. Drugs are tested this way under the supervision of the Food and Drug Administration, with some patients randomly chosen to receive a drug and others randomly given a placebo (that is, a fake drug that is known to be of no help). Labor economists eagerly look for opportunities to randomly assign various labor market conditions (e.g., access to health insurance).

Perhaps most famously, development economists – randomistas, as Prof. Angus Deaton calls them – have randomly assigned economic assistance to poor villages in order to measure the rates of return on that assistance. Prof. Jeffrey Sachs’s Millennium Villages Project, an ambitious effort to help African villages escape poverty, has been criticized for, among other things, failing to randomly assign its treatments.

But Professor Sachs didn’t accidentally forget to randomize his assistance. He thinks that it’s wrong to withhold from poor people assistance that he’s confident can help. The patients who get placebos in randomized F.D.A. trials would probably agree.

The problems with randomized trials cannot be dismissed as mere philosophical challenges, because people react to the poor treatment they get from experimenters. Why should a patient agree to let a die roll or a random number generator decide his fate?

By insisting on randomization, experimenters have trouble recruiting study participants, and participants’ reluctance to take part prevents us from learning as much as we could about new treatments (I owe this point to my colleagues Tomas Philipson and Gary Becker).

One approach to this problem is to prevent participants from knowing that they are participating in experiments or that researchers are introducing randomness into their environment, as natural or “unframed” field experiments do (see this paper by Omar Al-Ubaydli and John List on the different kinds of experiments in economics). Professor Sachs’s approach is to economize on the randomness.

Randomization should not be a goal in and of itself, but rather one tool among many for learning how the world works at lower cost and greater benefit.

Professor Philipson, a former chief economist at the F.D.A., has recommended that people participating in new drug trials be compensated for their participation. That practice might appear to make drug trials more expensive, but an active market for study participants would help us quantify some of the costs of randomization and increase the knowledge generated by each study.

If, for example, clinical trial participants really dislike the possibility of getting a placebo instead of the real drug, they would have to be paid more for studies that give out a lot of placebos. If Professor Ziliak and others have exaggerated the costs to participants of randomization, then studies using randomization would not have to pay much extra for participants.
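The compensating-differential logic here can be put in a toy calculation. The dollar figures and the linear form below are assumptions made up for the sketch, not estimates of anything:

```python
# Hypothetical compensating differential: if participants dislike the
# chance of receiving a placebo, studies with a higher placebo
# probability must offer higher pay to recruit the same people.
# All numbers are invented for illustration.

def required_payment(placebo_probability, base_pay=100.0, placebo_disutility=80.0):
    """Pay that leaves a participant willing to join the trial."""
    return base_pay + placebo_disutility * placebo_probability

print(required_payment(0.5))   # half the participants get a placebo -> 140.0
print(required_payment(0.0))   # everyone gets the real drug -> 100.0
```

If the disutility of randomization is as small as its defenders claim, the market-revealed premium would be small too; if it is large, the required payments would say so.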

As it stands, most federally funded randomized trials do not pay participants for their time and risk-taking. But if experimenters had to pay for the costs they impose on participants, they might be induced to make causal inferences with more thinking and less dice rolling.