This blog on Texas education contains posts on accountability, testing, college readiness, dropouts, bilingual education, immigration, school finance, race, class, and gender issues with additional focus at the national level.

Monday, August 16, 2010

This article in the Times is creating quite a stir. It captures some of the contours of the larger public debate today about teacher effectiveness and how to measure it.

To this, I will add Dr. Stephen Krashen's critique of the value-added hypothesis, presented here:

A recent LA Times article, "Who's teaching L.A.'s kids?" (August 15), presented readers with the results of an LA Times-sponsored "value-added" analysis of teaching in the Los Angeles Unified School District. The statistical analysis was done by an economist, and was supplemented by classroom observations made by LA Times reporters.

"Value-added" appears to be a common-sense idea: Teachers are rated by the gains their students make on standardized tests of reading and math. The assumption is that good teachers produce large gains and poor teachers produce small gains or may cause back-sliding. The Times assumes that the value-added method is a valid measure of teacherquality. It isn't.

Problems with value-added analyses

Value-added evaluations of teachers make several assumptions.

First, they assume that higher test scores are always the result of teaching. Not so. Test scores are influenced by other factors:

- We can generate higher scores by teaching "test preparation" techniques, that is, strategies for getting higher scores without students learning anything, e.g., telling students when and how to guess, and familiarizing students with the test format.

- We can generate higher scores by testing selectively, e.g., making sure the lower scorers are not in school the day of the test.

- And of course we can generate higher scores by direct cheating, getting inside information about specific test questions and sharing this with students.

Second, value-added analyses assume that teachers are randomly assigned to classes. They aren't. Some teachers are given high-achieving students who will make rapid gains on standardized tests, and some teachers are consistently assigned to teach lower-achieving students who will not make clear gains.

Third, value-added analyses assume that the value-added score for a teacher is stable, that a teacher producing high gains one year will always produce high gains. But studies show that value-added estimates for individual teachers can be unstable over time (Schochet and Chang, NCEE 2010-4004).

There is also evidence that a teacher's value-added score can be substantially different for different reading tests (Papay, 2010, American Educational Research Journal, 47(2)).

Fourth, there is always some fluctuation in scores. Even if all teachers were equally effective in raising test scores, a value-added analysis would still find students of some teachers making higher gains than others, due to random factors.
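This point can be demonstrated with a small simulation: give every "teacher" an identical true effect (here, zero) and let student gains be pure random noise. The estimated value-added scores still spread apart, ranking some teachers above others by chance alone. All parameters below (class size, noise level) are arbitrary assumptions for illustration:

```python
# Simulation: identical teachers, random noise in student gains.
# The resulting per-teacher averages still differ, illustrating that
# value-added rankings fluctuate even with no real quality differences.
import random
from statistics import mean

random.seed(1)  # fixed seed for reproducibility

def simulated_value_added(n_teachers=10, students_per_teacher=25, noise_sd=15):
    # True teacher effect is 0 for everyone; each student's gain is noise.
    return [
        mean(random.gauss(0, noise_sd) for _ in range(students_per_teacher))
        for _ in range(n_teachers)
    ]

estimates = simulated_value_added()
print(min(estimates), max(estimates))  # nonzero spread despite identical teachers
```

With only a class's worth of students per teacher, the noise in the average gain is large enough to separate "best" from "worst" teachers who are, by construction, exactly the same.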

Finally, some standardized tests focus on knowledge of specific facts and procedures. Teachers who prepare students for higher scores on such tests are not teaching; they are simply drilling students with information that will soon be forgotten.

Neglected factors

The heavy focus on measuring teacher quality can give the false impression that teacher quality is everything. Study after study, however, has shown that poverty is a stronger factor than teacher quality in predicting achievement. The best teachers in the world will have limited impact when children are undernourished, have high levels of lead in their bodies, live in noisy and dangerous environments, get too little sleep, and have no access to reading material.

Beyond Cold Fusion

The scientific world was outraged when cold fusion researchers presented their work to the public at a press conference before submitting their results for professional review. The Times has gone beyond this: They clearly have no intention of allowing professional review, and feel that it is their right to present their conclusions on the front page of the Sunday newspaper.

The Times also supplemented its findings with comments from reporters who observed teachers in their classes. This procedure sends the message that the Times considers educational practice to be so straightforward that it requires no special background.

The Times is a newspaper, not a scientific journal. It has, however, been practicing educational research without a license. Would we accept this in other areas? Would we trust the Times to do a value-added analysis of brain surgery, with reporters critiquing surgical procedures?