When I start discussing evaluations with government partners, and note that we will need to follow and survey over time a control group that did not get the program, one of the first questions I always get is “Won’t it be really hard to get them to respond?” I often answer with reference to a couple of case examples from my own work, but I now have a new answer, courtesy of a new paper on testing for attrition bias in experiments by Dalia Ghanem, Sarojini Hirshleifer and Karen Ortiz-Becerra.

As part of the paper, they conduct a systematic review of field experiments with baseline data published in the top 5 economics journals plus the AEJ Applied, EJ, ReStat, and JDE over the years 2009 to 2015, covering 84 journal articles. They note that attrition is a common problem, with 43% of these experiments having attrition rates over 15% and 68% having attrition rates over 5%. The paper then discusses what the appropriate tests are for determining whether this attrition is a problem. But I wanted to highlight this panel from Figure 1 in their paper, which plots the absolute value of the difference in attrition rates between treatment and control. They note “64% have a differential rate that is less than 2 percentage points, and only 10% have a differential attrition rate that is greater than 5 percentage points.” That is, in most experiments attrition rates do not differ much between the treatment and control groups.
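To make the differential attrition idea concrete, here is a minimal sketch (on simulated data, not theirs) of the standard first check: compare attrition rates across arms and run a two-proportion z-test on the difference. This illustrates the quantity plotted in their Figure 1, not the paper's own proposed tests.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated experiment: random treatment assignment, and follow-up
# response rates of 85% in control vs. 88% in treatment -- i.e. a
# 3 percentage-point differential attrition rate by construction.
treat = rng.integers(0, 2, n)
respond = rng.random(n) < np.where(treat == 1, 0.88, 0.85)
attrit = ~respond

rate_t = attrit[treat == 1].mean()
rate_c = attrit[treat == 0].mean()
diff = abs(rate_t - rate_c)  # the differential attrition rate

# Two-proportion z-test: is attrition significantly different by arm?
p_pool = attrit.mean()
n_t, n_c = (treat == 1).sum(), (treat == 0).sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
z = (rate_t - rate_c) / se
print(f"attrition: T={rate_t:.3f}, C={rate_c:.3f}, diff={diff:.3f}, z={z:.2f}")
```

Note that a small or insignificant differential rate is necessary but not sufficient for attrition to be ignorable, which is exactly why the paper develops sharper tests using baseline data.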

Here is a familiar scenario for those running field experiments: You’re conducting a study with a treatment and a comparison arm and measuring your main outcomes with surveys and/or biomarker data collection, meaning that you need to contact the subjects (unlike, say, using administrative data tied to their national identity numbers) – preferably in person. You know that you will, inevitably, lose some subjects from both groups to follow-up: they will have moved, be temporarily away, refuse to answer, have died, etc. In some of these cases there is nothing more you can do, but in others you can try harder: you can wait for them to come back and revisit; you can try to track them to their new location; and so on. You can do this at different intensities (try really hard or not so much), with different boundaries (for everyone in the study district, region, or country, but not for those farther away), and for different samples (everyone or a random sub-sample).

Question: suppose that you decide you have the budget to do everything you can to find those not interviewed during the first pass through the study areas (it doesn’t matter whether the budget covers a randomly chosen sub-sample or everyone), i.e. an intense tracking exercise to reduce the rate of attrition. In addition to everything else you can do to track subjects in both groups, you have a tool that is available only for the treatment arm. Say your treatment was group-based therapy for teen mums, and you think that the mentors for these groups may have key contact information for treatment-group subjects who moved; there were no placebo groups in control, i.e. no counterpart mentors. Do you use this source to track subjects – even though it is only available for the treatment group?

I have just finished writing up and expanding my recent policy talk on active labor market policies (ALMPs) into a research paper (ungated version) which provides a critical overview of impact evaluations on this topic. While my talk focused more on summarizing a lot of my own work on this topic, for this review paper I looked a lot more into the growing number of randomized experiments evaluating these policies in developing countries. Much of this literature is very new: out of the 24 RCTs I summarize results from in several tables, 16 were published in 2015 or later, and only one before 2011.

I focus on three main types of ALMPs: vocational training programs, wage subsidies, and job search assistance services like screening and matching. I’ll summarize a few findings and implications for evaluations that might be of most interest to our blog readers – the paper, of course, provides a lot more detail and discusses further some of the implications for policy and for other types of ALMPs.

Surveys are expensive. And, in sub-Saharan Africa in particular, a big part of that cost is logistics – fuel, car-hire and the like. So, with increasing mobile phone coverage, more folks are thinking about, and actually using, phones in lieu of in-person interviews to complete surveys. The question is: what does that do to data quality?

This list is a companion to our curated list on technical topics. It puts together our posts on issues of measurement, survey design, sampling, survey checks, managing survey teams, reducing attrition, and all the behind-the-scenes work needed to get the data needed for impact evaluations. Updated through October 23, 2018.

Measurement

On Friday I linked to a description of the survey procedures for an opinion poll in Cuba. This contained the description “At least three attempts were made to reach the selected individual, after which interviewers moved to the next house”.

Attrition is a bugbear for most impact evaluations, and can expose even the best-designed experiments to potential bias. In a new paper, Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.
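To give a flavor of the intuition, here is a stylized sketch on simulated data – not the authors' actual estimator, which also handles partial trimming at the margin and inference. If treated subjects are easier to reach, one can trim the treatment group to those reached within the largest number of attempts k such that the treated response rate within k attempts does not exceed the control response rate, making the two retained samples more comparable.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
treat = rng.integers(0, 2, n)

# Simulated number of survey attempts needed to reach each subject.
# Treated subjects are easier to reach (success prob. 0.6 vs. 0.5
# per attempt), so naive response rates differ by arm.
attempts = rng.geometric(np.where(treat == 1, 0.6, 0.5)).astype(float)
max_attempts = 3
reached = attempts <= max_attempts

resp_c = reached[treat == 0].mean()  # control response rate

# Trim the treatment group: largest k such that the share of treated
# reached within k attempts is at most the control response rate.
k_star = 0
for k in range(1, max_attempts + 1):
    if (attempts[treat == 1] <= k).mean() <= resp_c:
        k_star = k

kept_treated = (treat == 1) & (attempts <= k_star)
print(f"control response rate: {resp_c:.3f}, trim at k = {k_star}, "
      f"treated kept: {kept_treated.sum()}")
```

The design choice this illustrates: the number of attempts acts as a proxy for how hard a subject is to reach, so trimming on it targets the marginal respondents who would have attrited under the control arm's effort level.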

David’s post yesterday on migration got me thinking about the general problem of finding folks when you go back for the second round (or higher) of a panel survey. An interesting and extremely useful paper by Firman Witoelar on the LSMS-ISA