The ratings game: Do publicized teacher ratings have any value?

The city has been abuzz this past week about the ratings for 12,000 public school teachers released last Friday. A big reason for the high-volume ruckus, of course, is that making the Teacher Data Reports public was the focus of a long-running battle between the United Federation of Teachers and its many political allies and the Bloomberg administration.

Mayor Michael Bloomberg and members of his administration insist the ratings are an objective measure that provides invaluable information to parents about the quality of the teachers in their children’s classrooms. They point to the correlation of the ratings with other evaluations of teachers, such as those made by principals and other administrators.

Proponents of the system insist that objective analysis, complete with data, is necessary to rid the public school system of poor teachers. Many bad teachers have managed to stay in the system and keep earning teachers’ pay for years despite their deficiencies because they’ve been protected by the union.

The UFT, which lost a court battle to keep the ratings secret, downplays that phenomenon and stresses that the ratings are a deeply flawed way of evaluating teachers and that people who are not educators “may get the wrong idea,” in the words of United Federation of Teachers Staten Island representative Emil Pietromonaco.

We have to wonder what yardstick, if any, the UFT would agree to use to measure its members’ effectiveness. Teachers evaluating other teachers (glowingly, no doubt) might satisfy the union, but it would hardly be helpful.

And the union’s insistence on keeping the ratings under wraps makes it look as if the UFT doesn’t want inferior teachers exposed. People who have watched the union fight tooth and nail to keep the ratings secret have cause to wonder what there is to hide.

That being said, however, we share the union’s concern about the ratings, if not for exactly the same reasons.

Mr. Pietromonaco is right to worry about how parents will interpret the ratings, which have the patina of being hard information but not necessarily the heft. Will they overreact or ignore the ratings in favor of keeping their kids in their neighborhood schools?

Could this sudden flood of seemingly important data precipitate an exodus of students from schools where the teachers got relatively low ratings to schools where more teachers earned high marks because parents jumped to conclusions? (It appears there are some forces in the city who’d love to see just such an upheaval.)

That’s the point: What do the ratings really mean? Some 18,000 math and English public school teachers from fourth through eighth grades were tracked over a three-year period, from 2007 to 2010. Are the ratings still relevant in 2012?

And what of the 6,000 teachers whose ratings were not made public?

This seems to be yet another reflection of the city Department of Education’s obsession with quantifying the educational process, often in contrived and distortive ways. It’s the same obsession that led the DOE to award letter grades to individual schools, as if schools were not a function of all the people inside them, including the students and their pertinent backgrounds. The DOE even takes that folly a step further and closes schools that get “F” grades.

So we have the absurdity of PS 14 in Stapleton being ordered closed because its grade fell to “F.” But PS 14 got a grade of “A” as recently as 2009. And that absurdity was compounded by the DOE’s plan to put a “new school,” PS 78, in the exact same building in the exact same community with many of the same staff members.

Directly evaluating individual teachers perhaps makes more sense, but if what goes into the ratings is not fully understood by the public, the ratings, even if they are current (which these are not), are all but worthless and potentially misleading.

Even Schools Chancellor Dennis Walcott has questioned the value of the evaluations and cautioned parents not to make too much of them.

So the fundamental question in this controversy is: What purpose is served by making the ratings public if the public is unable, by and large, to put them to good use?

The administration’s defenders like to crow about accountability in the private sector. But in the private sector, employee evaluations are not published in the newspapers or on the Internet. They are the subject of a private discussion between the employees and their superiors. Public employees are entitled to that level of discretion. (Should the evaluations of police and firefighters be made public?)

The Bloomberg administration may believe that it has won a big victory in its ongoing war with the UFT and the education lobby in releasing these ratings. But while we certainly understand the genesis of that war and that the unions often earn their enemies’ criticism, we think that this issue should be viewed only in terms of the public good.

And on balance, we don’t see how making ratings of dubious provenance fodder for uninformed discussion is in the public interest.