We estimate that the bias associated with using Regression Discontinuity (RD) is less than 0.01 standard deviations on average, suggesting that the method has high internal validity.

We estimate that the bias associated with using RD remains below 0.07 standard deviations when study-level results are shrunken, suggesting that the method may have good external validity.

We find substantial variation in bias across individual impact estimates from RD studies, suggesting that authors should interpret individual RD-based impact estimates cautiously.

We find some evidence favoring non-parametric RD methods.

Theory predicts that regression discontinuity (RD) provides valid causal inference at the cutoff score that determines treatment assignment. One purpose of this paper is to test RD’s internal validity across 15 studies. Each assesses the correspondence between causal estimates from an RD study and a randomized controlled trial (RCT) when the estimates are made at the same cutoff point, where they should not differ asymptotically. However, statistical error, imperfect design implementation, and a plethora of possible analysis options mean that they might nonetheless differ. We test whether they do, assuming that the potential for bias is greater with RDs than with RCTs. A second purpose of this paper is to investigate the external validity of RD by exploring how the size of the bias estimates varies across the 15 studies, for they differ in their settings, interventions, analyses, and implementation details. Both Bayesian and frequentist meta-analysis methods show that the RD bias is below 0.01 standard deviations on average, indicating RD’s high internal validity. When the study-specific estimates are shrunken to capitalize on the information the other studies provide, all the RD causal estimates fall within 0.07 standard deviations of their RCT counterparts, now also indicating high external validity. With unshrunken estimates, the mean RD bias is still essentially zero, but the distribution of RD bias estimates is less tight, especially with smaller samples and when parametric RD analyses are used.
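The shrinkage of study-specific bias estimates toward a pooled mean is a standard empirical-Bayes step in random-effects meta-analysis. The sketch below illustrates the general mechanics with a DerSimonian–Laird estimate of between-study variance; the bias values and standard errors are hypothetical placeholders, not data from the 15 studies analyzed in this paper.

```python
import numpy as np

# Hypothetical per-study RD-minus-RCT bias estimates (in SD units)
# and their standard errors -- illustrative numbers only.
b = np.array([0.02, -0.05, 0.01, 0.08, -0.03, 0.00])
se = np.array([0.04, 0.06, 0.03, 0.07, 0.05, 0.04])

# DerSimonian-Laird estimate of between-study variance tau^2.
w = 1.0 / se**2
mu_fe = np.sum(w * b) / np.sum(w)          # fixed-effect pooled mean
Q = np.sum(w * (b - mu_fe) ** 2)           # Cochran's Q heterogeneity statistic
k = len(b)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)         # truncated at zero

# Random-effects pooled mean of the bias.
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * b) / np.sum(w_re)

# Empirical-Bayes shrinkage: pull each study's estimate toward the
# pooled mean in proportion to its sampling noise.
shrink = tau2 / (tau2 + se**2)
b_shrunk = mu_re + shrink * (b - mu_re)

print("pooled bias:", round(mu_re, 3))
print("shrunken estimates:", np.round(b_shrunk, 3))
```

Noisier studies (larger standard errors) receive more shrinkage, which is why the shrunken study-level estimates cluster more tightly around the pooled mean than the raw ones.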