Using Lagged Outcomes to Evaluate Bias in Value-Added Models

Abstract

Value-added (VA) models measure agents' productivity based on the outcomes they produce. The utility of VA models for performance evaluation depends on the extent to which VA estimates are biased by selection. One common method of evaluating bias in VA is to test for balance in lagged values of the outcome. Using Monte Carlo simulations, we show that such balance tests do not yield robust information about bias in value-added models. Even unbiased VA estimates can be correlated with lagged outcomes. More generally, tests using lagged outcomes are uninformative about the degree of bias in misspecified VA models. These results arise because VA is itself estimated using historical data, leading to non-transparent correlations between VA and lagged outcomes.
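The mechanism in the last sentence can be illustrated with a stylized simulation (this is a minimal sketch of the general idea, not the paper's actual Monte Carlo design; the classroom-persistence assumption and all parameter values below are illustrative). Teacher effects are estimated from year-1 class means under random assignment, so the VA estimator is unbiased for each teacher's true effect. But because classes stay intact, those same year-1 scores are the lagged outcomes of year-2 students, so a balance test regressing lagged scores on VA "fails" mechanically, with no selection bias anywhere in the data-generating process:

```python
import numpy as np

rng = np.random.default_rng(0)
J, n = 500, 25                            # teachers, students per class

mu = rng.normal(0.0, 0.15, J)             # true teacher effects
a = rng.normal(0.0, 1.0, (J, n))          # student ability, randomly assigned in year 1

# Outcomes in two years; classes stay intact (illustrative persistence assumption)
y1 = a + mu[:, None] + rng.normal(0.0, 0.5, (J, n))
y2 = a + mu[:, None] + rng.normal(0.0, 0.5, (J, n))

# VA estimated from historical (year-1) data: centered class mean.
# Under random assignment, E[va_hat_j | mu_j] = mu_j, i.e. no selection bias.
va_hat = y1.mean(axis=1) - y1.mean()

# Balance test: regress year-2 students' lagged score (their y1) on
# their current teacher's estimated VA.
x = np.repeat(va_hat, n)                  # teacher VA, one entry per student
y = y1.ravel()                            # lagged outcomes of year-2 students
slope, intercept = np.polyfit(x, y, 1)

print(f"balance-test slope: {slope:.3f}")          # ~1: lagged scores look badly "imbalanced"
print(f"corr(va_hat, mu):   {np.corrcoef(va_hat, mu)[0, 1]:.3f}")
```

Because each lagged score enters its own teacher's VA estimate through the year-1 class mean, the balance-test slope is close to 1 even though assignment is random and the VA estimator is unbiased, which is exactly why such balance tests are uninformative about selection bias here.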