Through the lens of social science, eduwonkette takes a serious, if sometimes irreverent, look at some of the most contentious education policy debates. (Find eduwonkette's complete archives prior to Jan. 6, 2008 here.)

NYC's Trojan Horse

skoolboy has absolutely nothing of substance to say about Education Secretary nominee Arne Duncan, whom he has met exactly once. But he continues to mouth off about New York City's Teacher Data Reports, the NYC Department of Education's version of value-added assessment. Which are not to be used to evaluate teacher performance. But rather for instructional improvement. Excuse me, skoolboy has something in his eye.

It's hard not to view these Teacher Data Reports as a Trojan Horse. Just how is a tool that is designed for capacity-sorting supposed to function for capacity-building? After all, a teacher value-added measure might tell us something useful about which teachers are more or less successful in raising their students' test scores, but it tells us nothing about the specific instructional practices that account for their relative success.

How are Teacher Data Reports supposed to improve instruction? In her videotaped comments to teachers, Amy McIntosh, the Chief Talent Officer at NYC's Department of Education, says, "These reports will provide information that will help teachers and school leaders gain insights about important aspects of a teacher's practice ... Whether individual teachers have a greater influence on the learning of some groups of students than on others ... Finally, we can see what teachers might benefit from development focused on, say, the needs of English language learners, and which teachers might be best positioned to lead that kind of professional development ... We also think they will ... help you think about how you can share the techniques you use with your colleagues in your school or across the city."

Hmm. So the specific strategies for improving teaching practice are what, exactly? Having more successful teachers lead the professional development of less successful teachers? Expert practitioners don't always make expert coaches. Hall-of-Fame pro basketball player Isiah Thomas--unquestioned as one of the best point guards of all time--was a mediocre coach for the Indiana Pacers and New York Knicks.

Here's why. Teaching is an extraordinarily complex activity, with teachers making thousands of decisions in the course of their work. Successful teachers make many good decisions and some bad decisions, whereas less successful teachers make many bad decisions and some good decisions. But the capacity to reflect on one's practice and figure out which of those decisions are good and which are bad is exceedingly rare, as is the capacity to share this knowledge with others. In the absence of this reflective capacity, we're all prone to attribute our successes and failures to our pet theories, which may or may not be correct. A Teacher Data Report that provides reassurance that a teacher is successful will only solidify and reinforce a personal folk theory about the reasons for that success.

Yet the Teacher Data Report provides no evidence whatsoever about why a teacher is successful--the many daily practices that promote student learning. And if a teacher's personal theory is inaccurate, then sharing it with others will improve neither instruction nor student achievement. It could even make things worse, by focusing attention on ineffective practices. A tool like the Teacher Data Report that claims to be useful for increasing teachers' capacity to teach students effectively, but in fact is only useful for ranking teachers on their effectiveness, is a modern-day Trojan Horse.

5 Comments

For better or worse the best PD has always been given by teachers. What gripes me is that administration always starts up some new PD initiative by blowing their own horn about how great the PD is, but it's the teachers who provide the materials and do all the presentation work. If you actually log on to ARIS or ACUITY, you'll find a structure in place but no content. There is no "there" there. When I logged on to see the test scores and transcripts of my students I had a few minutes of "gee, that's interesting," but nothing concrete to use other than what I'm already using.

"But the capacity to reflect on one's practice and figure out which of those decisions are good and which are bad is exceedingly rare, as is the capacity to share this knowledge with others."

I find this statement to be exceptionally inaccurate and uninformed. Schools that are premised upon developing a strong and successful learning community engage in critical reflection on a regular basis. To think that teachers are so overwhelmed by the complexity of their day as to overshadow the ability to critically assess their practice (using a variety of measures, including their consistent use of formative assessment and data collection) is preposterous.

Effective teachers are well aware of their weaknesses and strengths, and work relentlessly to fortify their instructional practices. That's an inherent and integral responsibility of our job.

I believe you are taking the DOE's comments well beyond their intent. They are suggestions about the role these reports may play, not mandates.

Yes, the Trojan Horse has arrived. But some of us do (even after a day of making thousands of decisions!) have the common sense and good fortune to understand the statistical limitations of such measures, as well as the beneficial information they may provide.

Should we designate an instructional leader on the basis of a single instrument? Of course not. But such a measurement (if consistent over time), combined with a variety of other evaluative tools, may help to identify those teachers who are positively impacting student achievement, and subsequently, the instructional practices they implement to generate such results. If such a teacher can effectively provide instructional support to their peers, all the better.

The effective writer, audience member and respondent today is expected to first commend the other party on a very interesting post and set of challenges.

There it is. Record my points.

Skoolboy is too hard on what may be a genuine quality-control program, or on valuable parts of one.
The Montgomery County (MD) Education Association operates a "Peer Assistance & Review" (PAR) program staffed by trained peer evaluators on two-year full-time rotations, offering a second, independent opinion alongside the administrators'. (www.mcea.nea.org)
Who knows whether it is the detailed, clinical, and articulate analysis, aided by videotape and other technology, or the support such programs provide that does more to increase effectiveness? This skeptical numbers cruncher was impressed by a 20-minute presentation on PAR, one that earned a nod of respect and approval from Randi Weingarten. It exceeded my expectations for quality control and improvement in teaching.

Now, whether the MCEA (or the likes of the Gates Foundation) has an interest in coding the performances of teachers before and after peer assistance, and then finding improvement in (the crude and limited measures of) learning is something else. Maybe some teachers will keep their jobs after learning how to squelch a tic or habit that bothered an average of one child per year and stirred a parental complaint to the principal.

It's uncharacteristic for a skeptical numbers cruncher such as you to tout a program that appears to have no evaluative evidence about its effects. In any event, I wasn't talking about Montgomery County's Peer Assistance and Review program. Nor am I suggesting that a well-designed system of teacher peer coaching will be unsuccessful. What I am saying is that an initiative whose fundamental idea is "share what you think is working in your classroom with other teachers" is not a well-thought-out program of professional development.

The best PD I've ever had is from ASCD and is taught by current classroom teachers.

As exhausted as I may be by my "thousands of decisions" (could ya be any more condescending?), one would think self-preservation would dictate a bit of reflection on which of those decisions worked out OK.

Feel-good-let's-share-what-works! PD is not effective. But when a group of teachers is deliberately investigating and developing a set of unified practices and sharing data on outcomes, the results are generally pretty good for the kids.