May 15, 2010

Teaching evaluations are a broken system.

For those not familiar, pretty much all colleges and universities have course evaluations at the end of a term, where students have the opportunity to provide anonymous feedback about the course and the teacher. It’s usually a bunch of bad survey questions, and, at least on my campus, you’re ranked relative to other profs or TAs in your department and across the campus at large.

A couple of problems with this. First, the numbers are meaningless unless they are compared to something else (indeed, many of the questions here are phrased specifically as “how would you rate [blah] compared to other [blahs] on campus?”, which makes them especially meaningless on their own), but comparing everyone against each other is a zero-sum game. No matter how good the teaching on a campus gets, someone is going to be at the bottom of the distribution.

Survey numbers in general are a really bad way of collecting high-stakes data (for one thing, there’s a self-selection bias in who completes them at most schools), and it’s even worse to use data from a “survey” designed with very little thought and no validity checks. And these data are high-stakes: the more teaching-oriented the school, the higher the stakes. But the students don’t know this. Nearly always these surveys are framed as “opportunities to improve the course,” but behind the scenes they are used in hiring, merit-raise, and promotion decisions. It might be interesting to run a study (I’m too lazy to check whether one has been done already, though I wouldn’t be surprised if it has): have students rate a professor with instructions that emphasize course improvement versus instructions that emphasize hiring/raise/promotion consequences. I mean, it’s kind of amazing to me that we don’t disclose how these data are used.

That brings me to the last point: the asymmetrical nature of the feedback. Teachers have to give high-stakes, potentially ego-threatening feedback to students (in the form of grades and evaluations of student work), sometimes to their faces and always non-anonymously, but students get to deliver their high-stakes, threatening feedback anonymously. Most if not all schools have students complete the evaluation before final grades are released (though the teacher doesn’t receive the results until after); but of course students have already received grades and feedback on everything else in the course by that point. Clearly students will provide feedback differently if the professor/TA/whoever knows who they are; that is the whole point of anonymous feedback, to get their “real” opinion. But there is no such thing as a “true” opinion, and the asymmetry of evaluation opportunities is potentially harmful.

For one, it leads to grade inflation and a softening of course material: when the class is too hard or students don’t all get As, course evaluations go down. Course evaluations are high-stakes. So instruction becomes subservient to the anonymous feedback of students, who, just like on the internet, will say the kinds of things or give the kinds of ratings that they would never dream of saying to someone’s face. Here’s a gem from the most recent teaching evals for my husband’s cousin, a professor of media communication: “Dr. [cousin] wears baggy pants that looks like he’s wearing a diaper.” On one hand, maybe he didn’t know, and who else is going to tell you that? On the other hand: really?? This is how seriously his students take these evaluations; this is what will (partially) decide his next merit review?!