Digesting the difficult decisions of development

Why no trial registry for development interventions?

Chris Blattman, citing a recent paper (ungated here) by Rasmussen, Malchow-Møller and Andersen, wonders why donors and the big RCT research groups aren’t really pushing for a trial registry.

I think the answer is pretty simple: incentives. Trial registries force us to specify our hypotheses carefully up front. The benefits for research quality are obvious: there’s a lot less data mining (atheoretically scouring the data for correlations) and, consequently, fewer bogus results.

Yet the amount of viable research decreases significantly once we tie our hands ex-ante. From a purely cost standpoint, you can see why donors wouldn’t be happy, given that they fund a lot of this research. If you’re going to spend $50,000 on an RCT, you want the researcher to come up with as many results as possible, even if these results are ‘discovered’ later, and are a little bit suspect. An RCT that only answers one question (even if it answers it very, very well) doesn’t appeal to the zealots of value-for-money. Furthermore, to the extent that donors are backing rigorous impact analysis for the purpose of choosing the right tools, more results = more taglines. When the head of DFID defends a new project scaling up a newly proven intervention, he/she wants to be able to say “our project has a proven impact on X, Y and Z”, not just on X. In the quality-quantity trade-off, donors will strictly prefer quantity.

What about the big research houses? Both top-down and bottom-up incentives are a problem here. At the end of the day, someone allocates money to these places (a donor, a foundation, etc.), and that someone is going to have a similar objective function to the donors: more research per dollar is better. What about individual researchers? At the end of the day, we all have to publish… and, let’s face it, data mining is easy and fun, if pretty dodgy.

How do we move from the current equilibrium to the one seen in the medical sciences? Maybe the journals need to make the first move on this by making pre-registration a prerequisite. I can see this making a lot of people really unhappy.

4 thoughts on “Why no trial registry for development interventions?”

Ranil Dissanayake

June 7, 2011 at 10:27am

Matt, question:

Is all ‘data mining’ illegitimate? What if you’re examining data, see a correlation, investigate, and then find out there’s a plausible (though not necessarily directly verifiable) causal mechanism that explains it?

I also get irritated by selection bias in publication and am generally healthily skeptical of a lot of data analysis, but in defence of economics, other disciplines routinely do what could be described as data mining. Take historians: we look at what has happened and try to explain. This has value, too.

Matt

June 7, 2011 at 10:37am

Ranil, answer:

No, not quite – there are degrees. We need data to inform theory – you can’t start in a complete vacuum (although some like to pretend they do). But ideally there is a back-and-forth process: theory informs a design, you notice some results you didn’t plan for, so you go back and try to determine whether you can re-verify those results, but with a new design (not just by vaguely referring back to theory). Many researchers stop halfway, and publish.

Using the more precise definition of data mining, yes, it can lead to some very iffy results. I’ll refer you back to this XKCD comic: http://aidthoughts.org/?p=2309. Through pure probability, you can come up with results the more you dig around in the data – and these results are more likely to have come about by pure chance than through any underlying mechanism. This pretty much sums up all aid/growth regressions.
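The pure-chance point is easy to demonstrate. Here’s a minimal sketch (my own illustration, not from the paper under discussion): regress a noise ‘outcome’ against 100 noise ‘explanatory’ variables, and at the 5% level you should expect roughly five spurious ‘significant’ correlations.

```python
import math
import random

random.seed(1)

def pearson_r(x, y):
    # Plain Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

n = 30    # observations (a typical cross-country sample size)
k = 100   # candidate 'explanatory' variables, all pure noise
outcome = [random.gauss(0, 1) for _ in range(n)]

# Two-sided 5% critical value for |r| with n = 30 is roughly 0.361
critical = 0.361
hits = sum(
    1
    for _ in range(k)
    if abs(pearson_r([random.gauss(0, 1) for _ in range(n)], outcome)) > critical
)
print(f"'significant' correlations found by chance: {hits} of {k}")
```

None of these variables has anything to do with the outcome, yet an unregistered researcher who only reports the ‘hits’ would look like they had found several real relationships – which is exactly what pre-registration is meant to prevent.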

Historical analysis can have the same problem (explaining away possibly random events) – I think this is more likely the more specific the event being studied (i.e. decisions during crises, where agents may have been following specific strategies or may have just chosen one course of action by chance). Good historians can get around these problems, but they aren’t immune.

Easy solution: one or two of the most highly regarded journals put such a system in place. They either:
(a) require all new papers to register in advance, or
(b) clearly label those that don’t.
Probably (b) would be better, to ensure that ‘good’ data mining is still allowed and published. Other journals may follow suit, or perhaps devise their own requirements. One monolithic system may not be the ideal solution given the breadth of economic research.