4.4.3 Mechanisms

Experiments measure what happened. Mechanisms explain why and how it happened.

The third key idea for moving beyond simple experiments is mechanisms. Mechanisms tell us why or how a treatment caused an effect. The process of searching for mechanisms is also sometimes called looking for intervening variables or mediating variables. Although experiments are good for estimating causal effects, they are often not designed to reveal mechanisms. Digital experiments can help us identify mechanisms in two ways: (1) they enable us to collect more process data and (2) they enable us to test many related treatments.

Because mechanisms are tricky to define formally (Hedström and Ylikoski 2010), I’m going to start with a simple example: limes and scurvy (Gerber and Green 2012). In the eighteenth century, doctors had a pretty good sense that when sailors ate limes, they did not get scurvy. Scurvy is a terrible disease, so this was powerful information. But these doctors did not know why limes prevented scurvy. It was not until 1932, almost 200 years later, that scientists could reliably show that vitamin C was the reason that limes prevented scurvy (Carpenter 1988, 191). In this case, vitamin C is the mechanism through which limes prevent scurvy (figure 4.10). Of course, identifying the mechanism is also very important scientifically—lots of science is about understanding why things happen. Identifying mechanisms is also very important practically. Once we understand why a treatment works, we can potentially develop new treatments that work even better.

Figure 4.10: Limes prevent scurvy and the mechanism is vitamin C.

Unfortunately, isolating mechanisms is very difficult. Unlike limes and scurvy, in many social settings, treatments probably operate through many interrelated pathways. However, in the case of social norms and energy use, researchers have tried to isolate mechanisms by collecting process data and testing related treatments.

One way to test possible mechanisms is by collecting process data about how the treatment impacted possible mechanisms. For example, recall that Allcott (2011) showed that Home Energy Reports caused people to lower their electricity usage. But how did these reports lower electricity usage? What were the mechanisms? In a follow-up study, Allcott and Rogers (2014) partnered with a power company that, through a rebate program, had acquired information about which consumers upgraded their appliances to more energy-efficient models. Allcott and Rogers (2014) found that slightly more people receiving the Home Energy Reports upgraded their appliances. But this difference was so small that it could account for only 2% of the decrease in energy use in the treated households. In other words, appliance upgrades were not the dominant mechanism through which the Home Energy Reports decreased electricity consumption.
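The arithmetic behind this kind of process-data argument is simple: compare the effect the candidate mechanism could plausibly produce with the total treatment effect. A minimal sketch below illustrates the calculation; the function name and all numbers are made up for illustration and are not from Allcott and Rogers (2014).

```python
# Toy calculation of how much of a total treatment effect a candidate
# mechanism can account for, in the spirit of the appliance-upgrade
# analysis described in the text. All numbers are hypothetical.

def mechanism_share(total_effect, mechanism_effect):
    """Fraction of the total effect attributable to the mechanism."""
    return mechanism_effect / total_effect

# Hypothetical: the treatment lowers usage by 2.0 percentage points,
# and the extra upgrades it induces can explain at most a 0.04-point drop.
share = mechanism_share(total_effect=2.0, mechanism_effect=0.04)
print(f"{share:.0%} of the effect")  # prints "2% of the effect"
```

If the share is small, as here, the candidate mechanism cannot be the dominant pathway, whatever the exact numbers.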

A second way to study mechanisms is to run experiments with slightly different versions of the treatment. For example, in the experiment of Schultz et al. (2007) and all the subsequent Home Energy Report experiments, participants were provided with a treatment that had two main parts: (1) tips about energy savings and (2) information about their energy use relative to their peers (figure 4.6). Thus, it is possible that the energy-saving tips were what caused the change, not the peer information. To assess the possibility that the tips alone might have been sufficient, Ferraro, Miranda, and Price (2011) partnered with a water company near Atlanta, Georgia, and ran a related experiment on water conservation involving about 100,000 households. There were four conditions:

a group that received tips on saving water

a group that received tips on saving water plus a moral appeal to save water

a group that received tips on saving water plus a moral appeal to save water plus information about their water use relative to their peers

a control group

The researchers found that the tips-only treatment had no effect on water usage in the short (one year), medium (two years), or long (three years) term. The tips plus appeal treatment caused participants to decrease water usage, but only in the short term. Finally, the tips plus appeal plus peer information treatment caused decreased usage in the short, medium, and long term (figure 4.11). These kinds of experiments with unbundled treatments are a good way to figure out which part of the treatment, or which parts together, cause the effect (Gerber and Green 2012, sec. 10.6). For example, the experiment of Ferraro and colleagues shows us that water-saving tips alone are not enough to decrease water usage.

Figure 4.11: Results from Ferraro, Miranda, and Price (2011). Treatments were sent May 21, 2007, and effects were measured during the summers of 2007, 2008, and 2009. By unbundling the treatment, the researchers hoped to develop a better sense of the mechanisms. The tips-only treatment had essentially no effect in the short (one year), medium (two years), or long (three years) term. The tips plus appeal treatment caused participants to decrease water usage, but only in the short term. The tips plus appeal plus peer information treatment caused participants to decrease water usage in the short, medium, and long term. Vertical bars are estimated confidence intervals. See Bernedo, Ferraro, and Price (2014) for actual study materials. Adapted from Ferraro, Miranda, and Price (2011), table 1.

Ideally, one would move beyond the layering of components (tips; tips plus appeal; tips plus appeal plus peer information) to a full factorial design—also sometimes called a \(2^k\) factorial design—where each possible combination of the three elements is tested (table 4.1). By testing every possible combination of components, researchers can fully assess the effect of each component in isolation and in combination. For example, the experiment of Ferraro and colleagues does not reveal whether peer comparison alone would have been sufficient to lead to long-term changes in behavior. In the past, these full factorial designs have been difficult to run because they require a large number of participants and they require researchers to be able to precisely control and deliver a large number of treatments. But, in some situations, the digital age removes these logistical constraints.

Table 4.1: Example of Treatments in a Full Factorial Design with Three Elements: Tips, Appeal, and Peer Information

Treatment   Characteristics
1           Control
2           Tips
3           Appeal
4           Peer information
5           Tips + appeal
6           Tips + peer information
7           Appeal + peer information
8           Tips + appeal + peer information
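The eight arms in table 4.1 are just the \(2^3\) on/off combinations of the three components, which is why these designs scale so quickly with the number of components. A short sketch (component names taken from the water-conservation example; the variable names are my own) enumerates them:

```python
from itertools import product

# Enumerate every arm of a 2^k full factorial design, as in table 4.1.
# Each arm turns each component on or off independently.
components = ["Tips", "Appeal", "Peer information"]

arms = []
for included in product([False, True], repeat=len(components)):
    parts = [name for name, on in zip(components, included) if on]
    arms.append(" + ".join(parts) if parts else "Control")

for i, arm in enumerate(arms, start=1):
    print(i, arm)  # prints the 8 distinct arms, from "Control" up
```

With k components the same loop yields \(2^k\) arms, which is exactly why such designs historically required so many participants and so much control over treatment delivery.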

In summary, mechanisms—the pathways through which a treatment has an effect—are incredibly important. Digital-age experiments can help researchers learn about mechanisms by (1) collecting process data and (2) enabling full factorial designs. The mechanisms suggested by these approaches can then be tested directly by experiments specifically designed to test mechanisms (Ludwig, Kling, and Mullainathan 2011; Imai, Tingley, and Yamamoto 2013; Pirlott and MacKinnon 2016).

In total, these three concepts—validity, heterogeneity of treatment effects, and mechanisms—provide a powerful set of ideas for designing and interpreting experiments. These concepts help researchers move beyond simple experiments about what “works” to richer experiments that have tighter links to theory, that reveal where and why treatments work, and that might even help researchers design more effective treatments. Given this conceptual background about experiments, I’ll now turn to how you can actually make your experiments happen.