The article makes a case for implementing educator evaluation systems that use statistical modeling, such as value-added measures. But Dynarski argues that teachers will hesitate to endorse such systems until they fully understand how they work.
“A stronger scientific basis for evaluating teachers is a good thing, but the scientific community needs to work with policymakers to inform teachers how the systems work, and to identify what these systems can and cannot do. Changing how millions of teachers are evaluated is a big thing, and explaining it seems like due process,” Dynarski writes. “I don’t think we should skip measuring teacher effectiveness, but I think we can explain it better.”

Few states or organizations currently do a good enough job of explaining how their systems work, but Dynarski mentions VARC as an exception. VARC’s Oak Tree Analogy provides “accessible and useful information...illustrating how statistical models remove background factors from measures of teacher effectiveness,” according to Dynarski.
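To illustrate the idea behind removing background factors, here is a minimal sketch of a value-added-style calculation. It is not VARC's actual model; the teacher names, effect sizes, and simulated data are all hypothetical. The sketch regresses students' post-test scores on a background factor (their prior scores) and then averages each teacher's residuals, so that differences attributable to starting point are stripped out before teachers are compared.

```python
import random

random.seed(0)

# Hypothetical teachers with assumed "true" effects, used only to simulate data.
true_effect = {"A": 5.0, "B": 0.0, "C": -5.0}

# Simulate students: post-test depends on a background factor (prior score),
# the teacher's effect, and noise.
students = []
for teacher, effect in true_effect.items():
    for _ in range(200):
        prior = random.gauss(70, 10)
        post = 0.8 * prior + effect + random.gauss(0, 3)
        students.append((teacher, prior, post))

# Ordinary least squares of post-test on prior score (closed form, one predictor).
n = len(students)
mean_x = sum(s[1] for s in students) / n
mean_y = sum(s[2] for s in students) / n
slope = (sum((s[1] - mean_x) * (s[2] - mean_y) for s in students)
         / sum((s[1] - mean_x) ** 2 for s in students))
intercept = mean_y - slope * mean_x

# A teacher's value-added estimate: the average residual of their students
# after the background factor's predicted contribution is removed.
residuals = {}
for teacher, prior, post in students:
    residuals.setdefault(teacher, []).append(post - (intercept + slope * prior))
value_added = {t: sum(r) / len(r) for t, r in residuals.items()}
print(value_added)
```

With enough students per teacher, the average residuals recover the simulated teacher effects even though raw average scores also reflect where students started. Real value-added models are far richer (multiple years of scores, demographic controls, shrinkage estimators), but the residual-averaging idea is the core that the Oak Tree Analogy illustrates.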