Monday, February 25, 2013

One Useful Point from the MET Study

While the mean score was higher on the days teachers chose to submit, once you corrected for measurement error, a teacher’s scores on their chosen videos and on their unchosen videos were correlated at 1. They were perfectly correlated. The people who struggled on the lessons they were willing to submit also struggled on the lessons they didn’t submit. The best lesson from the best teacher is that much higher than the best lesson from the worst teacher. The rank order is preserved even if the mean rises.
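"Corrected for measurement error" here presumably refers to the classical disattenuation formula, which divides the observed correlation by the geometric mean of the two scores' reliabilities. The study doesn't spell out its exact method in this post, and the numbers below are hypothetical, but a minimal sketch shows how a modest observed correlation can disattenuate to 1:

```python
import math

def disattenuated_correlation(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation:
    r_true = r_observed / sqrt(reliability_x * reliability_y)."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# Hypothetical values, not figures from the MET study: suppose scores on
# chosen and unchosen videos correlate at 0.6 as observed, and each
# score's reliability is also 0.6. The true correlation works out to 1.
r_true = disattenuated_correlation(0.6, 0.6, 0.6)
print(round(r_true, 2))  # 1.0
```

The intuition: if each score is a noisy measurement of the same underlying teaching quality, the noise alone can pull the observed correlation well below 1 even when the true correlation is perfect.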

That has huge implications, because it means the element of surprise may not be that important. Having observers able to pop in whenever they like contributes a huge degree of anxiety. Give teachers a camera instead and say, “Submit four to five lessons you’re particularly proud of.” I think that would remove some of the anxiety that made this hard, and in the process it would have all sorts of other benefits. It would allow principals to time-shift. It would make it easier to get people outside the school involved in education. D.C. spends a lot to get those master educators to drive around to schools. If you could do this video-based thing, and still have them sit down with a teacher one on one to discuss the three or four lessons they submitted, rather than go out there physically, I just think it’d be a more efficient way to do it.

Less expensive, less anxiety, same results. What's not to like? The next thing they're going to discover is that if scores are stable over time you don't need to evaluate veteran teachers every year.