Make teacher evaluations fair

Before trying to gauge the state's progress in developing a successful teacher evaluation program, it's important to remember the state's core intentions behind this process.

Gov. Chris Christie doesn't much like teachers. He hates the teachers' union for sure, and he has tried to characterize his assaults on public education as somehow targeting only union leaders.

Nice try, but the end result of all of the rhetoric is to have diminished the public perception of teachers. He has relentlessly complained about educational failings in the state, even though those "failings" exist almost solely in poor urban areas in which students face many daunting societal challenges outside the classroom.

It's all part of Christie's conservative preference for more private, for-profit and charter-school options, and the road to expanding such options includes trashing the existing public-school system.

So how do those marching orders translate into the evaluation process, now in use statewide? Is the state merely trying to get rid of the worst teachers and help others improve, as supporters argue? Or does it want a lot of scalps to prove a point?

We've agreed with Christie from the start that some version of tenure reform that included a more meaningful evaluation process for teachers was appropriate. It didn't make sense that teachers had, in effect, virtual lifetime jobs after three years, and tenure protections so strong that it was all but impossible for districts to remove even the poorest of performers. It is also reasonable to expect teachers to be evaluated more thoroughly and productively. So we're on board with the concept.

The trick, of course, is how to do it all fairly. Teaching is more an art than a science, making it impossible to judge performance primarily through statistical analysis. Yet critics of the old system say that data such as test scores have to be part of the process.

What we have developing, therefore, is a complex, hydra-headed program that tries to blend statistics and observations in a way designed to spit out a final judgment placing teachers in one of four categories of effectiveness. Those scoring on the low end face the loss of tenure without improvement.

A state report earlier this month on the results of a two-year pilot program revealed a wide range of opinions on evaluation methods that have already prompted changes in the program. Districts are granted some flexibility in implementing their own approach, and there remains a great deal of uncertainty regarding the logistics of evaluating so many teachers through multiple observations and other criteria. The pilot districts also hadn't yet produced enough data to properly assess the effects of the "growth" component tracking individual student improvement in test scores, a particular sticking point among teachers.

There are, in other words, countless moving parts at this stage, making it unclear what we're supposed to take away from some of the initial findings. In the pilot program, more than a quarter of the teachers were rated only partially effective or lower. Some observers emphasized that negative spin, while others chose the more positive viewpoint that about 75 percent of teachers were considered effective or better.

But what does any of it really mean? Should we mostly be comforted by 75 percent good teachers or more troubled by the 25 percent not-so-good? And are those figures the result of an honest assessment of each teacher's performance, or does the state want to grade on a curve, assuring a specific distribution of teacher ratings?

It is troubling that the report's authors note teachers must be prepared to receive generally lower marks for effectiveness under the new rules. Evaluations under the old system were more perfunctory, but if administrators are wading into the process with a presumption that more scrutiny will equal poorer ratings, or worse, if they are operating under a mandate to assure substantial numbers of low marks, or if criteria are being adjusted to guarantee a certain percentage of less effective ratings, then the process has been tainted from the start.

The state has made a point of involving educators and other stakeholders in developing the evaluation system in hopes of alleviating some of the concerns of skeptical teachers. But the mere existence of those contributions doesn't preclude a stacked deck, if those pulling most of the strings have the wrong agenda in mind.

Moving forward, it's important that everyone involved in refining the evaluation system focus on what should be the overriding task at hand: Evaluating each teacher fairly.

The process cannot be ruled by a desire to validate the Christie administration view of failing schools. That's among the main concerns of teachers, and they have good reason to worry.
