School inspectors?

Here’s an interesting contribution from an education reformer who thinks more accountability can make – indeed has made – a big difference, but who also worries about a narrow focus on testing. As Fordham Institute analyst (and Executive Vice President) Michael Petrilli explains,

When it comes to evaluating teachers, there’s wide agreement that we need to look at student achievement results—but not exclusively. Teaching is a very human act; evaluating good teaching takes human judgment—and the teacher’s role in the school’s life, and her students’ lives, goes beyond measurable academic gains.

So why do we assume, when it comes to evaluating schools, that we must look at numbers alone? Sure, there have been calls to build additional indicators, beyond test scores, into school grading systems. These might include graduation rates, student or teacher attendance rates, results from student surveys, AP course-taking or exam-passing rates, etc. Our own recent paper on model state accountability systems offers quite a few ideas along these lines. This is all well and good.

But it’s not enough. It still assumes that we can take discrete bits of data and spit out a credible assessment of organizations as complex as schools. That’s not the way it works in business, famous for its “bottom lines.” Fund managers don’t just look at the profit and loss statements of the companies in which they invest. They send analysts to visit the team, hear about its strategy, kick the tires, talk to insiders, and find out what’s really going on. Their assessment starts with the numbers, but it doesn’t end there.

So it should be with school accountability systems. The best ones today take various data points and turn them into user-friendly letter grades, easily understandable by educators, parents, and taxpayers alike. So far so good. But why not add a human component to the process, via school inspectors like those in England? (See this excellent Education Sector paper, by my friend Craig Jerald, for background on how that works.)

I find this intriguing, since it combines the advantage of third-party assessment (one of the strengths of Advanced Placement exams) with the advantage of moving beyond mere numbers. If the inspectors’ “grades” gained credibility with teachers, parents, and administrators – and of course this would only happen if we didn’t end up with Lake Wobegon schools, every one of them above average – they might give us more useful, and nuanced, information about how schools actually perform.

What do you think? By the way, I recommend following the link to the report on state accountability efforts.