News feminist philosophers can use

Teaching evaluations December 3, 2012

There is ample evidence that implicit bias skews course evaluations to the disadvantage of minorities and women, particularly unattractive women. Some time back, I recall, someone at this site remarked that she was told the surest way to improve one’s evaluations was to lose 20 pounds. And it’s true.

Course evaluations are mandatory at my university, and the plan now is to switch to online evaluations, the results of which will go directly to administrators. Currently, in my department at least, tenured faculty don’t have to submit evaluations to their chair or other administrators. I never have.

I’d like to (1) compare notes about policies at other universities regarding course evaluations and (2) make the case that this use of evaluations should be resisted. It is a plain empirical fact that assessing women on the basis of evaluations puts us at a disadvantage, particularly if we’re unattractive.

I hope someone at this site can write on this. Setting up what is in effect a university-sponsored ratemyprofessor.com is detrimental to the interests of women, minorities, and unattractive people in the profession; it sends the message to students that going over instructors’ heads to the bosses is appropriate behavior; and it undermines the professional autonomy of all faculty.

This seems an excellent topic to discuss. Do put your thoughts in comments!

16 Responses to “Teaching evaluations”

It’s very disturbing that professors get a hotness rating on ratemyprofessor. Looking over the profiles of some randomly chosen profs, it seems that hot male professors especially benefit from their looks.

The ratemyprofessor chili pepper assessment is disturbing, but ratings at that site, as far as I know, aren’t used to make decisions on reappointment, tenure, or merit pay. And most users know enough to take the results at the site with a grain of salt.

If, however, a university runs its own in-house ratemyprofessor program, even without the chili peppers, it’s making the ratings official, giving them its imprimatur. And that is very much worse.

I’m at a CC, so I’m not sure how relevant my situation is to universities, but:

Administering college-authored student evals is required for all faculty. Tenured faculty must administer them in all Fall term classes; non-tenured faculty and adjuncts must do so in all of their classes. Full-time faculty must include a discussion of this student feedback in their yearly self-assessments. The feedback results are factored into tenure and promotion decisions.

The scores are considered public information (as are retention and pass rates), but the college doesn’t broadcast them widely, so there isn’t the equivalent of a ratemyprofessor web page. But faculty can compare their scores to others from similar classes on their campus and college-wide.

If anything, there is increased pressure from the state (FL) to place even more emphasis on such results. There’s a current push to use them in cases of revoking tenure.

Here is a recent paper on the research on student ratings: “Time to Raise Questions About Student Ratings,” Linda Nilson, _To Improve the Academy_, vol. 31 (Oct. 2012). In the 80s there were studies supporting the idea that student ratings correlated well with other assessments of student learning, and the suggestion was that they could serve as a limited proxy (but never stand on their own–they should be used only as part of the assessment of an instructor). In more recent studies, this link is gone. Now they correlate only with student satisfaction, not at all with learning. She also claims that they are more biased than they were in the past, including “over a dozen variables extraneous to learning and largely beyond faculty control.” Here is a Scribd preview (that includes the bias section): http://www.scribd.com/doc/105085714/14/CHAPTER-14?sh=b1d8eaba70e06981

I am taking online courses and I appreciate the chance to do online evaluations. Do I hope that they will influence teaching? Yes. Do they have to go to admin? I don’t see why. As a teacher, my own evaluations influenced me to get a non-teaching job. My ratings were not any worse than those of my post-doc supervisor, but he had apparently been tenured with his, while I had to have a friendly talk with the Head of Department. First, more research should be conducted on any biases in evaluations, and it should include admin people as participants so that they can’t ignore the research. Second, an online ranking system for admin people should be set up, because some bad apples do get to move around since their activities are not widely known. Third, unattractive faculty should probably give their evaluations before the final exam, to try to keep the answers honest or at least polite.

I am a relatively young and relatively attractive white woman, and I fairly consistently receive evaluations that indicate that about half my students simply do not like me. My uni’s system shows each person’s responses all together, so you can see when someone marks you as ‘very poor’ in every single category – including the ones about starting class on time, handing work back in a timely fashion, and following a clear grading procedure, all of which I am objectively at least ‘good’ at! I am a fairly no-nonsense person, and I expect students to be on time to class and to be prepared with whatever materials they’re supposed to have, and I think that’s just not congruent with whatever students expect when they see me – fun party girl, maybe? (Which I may be at times, when I choose, but certainly not while teaching!) It’s interesting, anyway, and it’s one reason I’m doing my dissertation research on the experiences of attractive women. :-)

I just read the free preview of the paper Kimberly linked to. A quick comment about the apparently common experience of outright lies (or apparent misunderstandings) on evaluations (e.g., students saying a prof showed up late when she never did, or didn’t provide feedback when she clearly did): if you point out such things to your Dean when you’re hauled in to account for your bad evaluations, he will simply say, “Well, then, you have to ask why the students are lying about it. Why are they not seeing that you’re providing feedback?”, implying that they don’t like you and that’s YOUR fault; you must be doing something wrong to elicit such behavior.

This is particularly uncomfortable for me today. I have a student who will be appearing before an academic judiciary board because of plagiarism in my course. The Dean said he should continue to finish out the course. So, today he got to ‘evaluate’ me. Neat.

I wouldn’t mind having student evals as a tool if I could set them up and the results were available primarily to me. I have questions about my class—what readings students like best, what they like least, how many people actually do the reading, how my grades compare to what students make in their other classes, how students think I can improve my lectures or the course in general—and it would be nice to put these questions to students in a format where they wouldn’t feel bad about hurting my feelings or whatever. Unfortunately, I can’t control the questions asked on evals at my school, and on the whole the evals matter more for my future employment than they do as a tool for helping me improve my teaching. A good teacher should be loved by good students and hated by bad students. Unfortunately, the current eval system does not take that into account. You must be loved by all students, even the ones who objectively deserve a poor grade. (Come to think of it, I also wouldn’t mind if we didn’t have to give out “grades” either.)

As a grad student in philosophy, my models for teaching came from TAing for large lecture classes (Intro to Philosophy, Medical Ethics, that sort of thing). PowerPoint-based lectures, with no more than 20% student interaction, were how I had been taught to teach, so that’s what I did when I began to teach my own courses. Last semester, I switched to a radically different seminar format, for which I give no lectures whatsoever. At most, I’ll spend five minutes unpacking an important point that the students can’t get on their own. Generally my role during class time is to moderate discussion and occasionally raise questions that nudge the students through the day’s topics.

None of my students had taken a seminar before (I asked). There was also a steep learning curve for me, and I learned a lot about how and how not to run a successful seminar. Altogether, it was a challenging class for both me and my students. And so I received the worst student evaluations (on average) of my career so far. Fortunately, I know enough statistics to decipher what happened: out of about 35 students, 3 or 4 hated my class — writing things like “he doesn’t teach” on their evaluations — and this small but highly opinionated group was able to drag down my average. The majority thought it was just as good as, if not better than, a regular lecture-based class. This analysis gave me the courage to try the format again; this semester was much smoother, and I’m hoping that’ll show up on my evaluations.
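For what it’s worth, the arithmetic behind that shift is easy to sketch. The exact ratings below are assumed for illustration (a plausible split for a class of 35 on a 5-point scale), not taken from my actual evaluations — only the class size and the handful of haters come from my story:

```python
# A sketch of how a small, opinionated minority drags down a mean rating.
# All numbers here are assumed for illustration.
import statistics

majority = [5] * 15 + [4] * 16   # 31 students who liked the seminar
minority = [1] * 4               # 4 students who "hated" it
ratings = majority + minority    # 35 ratings in all

mean_with = statistics.mean(ratings)       # mean including the haters
mean_without = statistics.mean(majority)   # mean of the satisfied majority
median_all = statistics.median(ratings)    # median is barely affected

print(f"mean with haters:    {mean_with:.2f}")    # ~4.09
print(f"mean without haters: {mean_without:.2f}") # ~4.48
print(f"median (all):        {median_all}")       # 4
```

Four 1s out of 35 knock roughly four tenths of a point off the mean, which on the usual scale is the difference between a “good” and a “mediocre” evaluation — while the median doesn’t move at all. That’s why reporting only the mean can make one bad-tempered corner of the room look like a class-wide verdict.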

I’m on the job market this year. I’d quite like a position at a liberal arts college, the kind of place that takes teaching very seriously. Obviously, my teaching portfolio wraps up with the very bad evaluations from last semester. Do such places care very much about student evaluations? I guess I’m going to find out.

tl;dr: Student evaluations don’t necessarily encourage good teaching. They encourage the kind of teaching that students expect, which strongly discourages experimentation.

I’m a graduate student and TA at a large, relatively prestigious university. We have had internal online evaluations for quite some time. There are a lot of downsides to the system, but I’ll mention a couple of positives. Occasionally, suggestions on the evaluations have helped me to improve my teaching. Also, because we have this internal system, our undergraduates don’t really use the ratemyprofessor site. The school’s system has more accountability; the online forms are, in theory (and only in cases of misconduct), traceable back to student evaluators by the administration, so students tend not to make vulgar statements or comment directly on instructors’ appearances—they are warned to be careful about such things. (Of course, they still can be quite rude, but no “chili peppers”, at least.)

Here’s one thing I’ve noticed about my evaluations that’s somewhat interesting. For large courses, instructors are able to break down their “scores” demographically. I consistently receive higher ratings (on average) from females than from males (I’m female). I don’t know what to make of it—my classes usually have more men than women, so maybe that’s a factor. There aren’t very many other women teaching in the department, and none of the women (or men) I’ve talked to about it seem to have noticed a similar trend; more often than not, the men seem to rate instructors higher on average. I don’t know whether I should be concerned or whether I can do anything to address it; it’s just sort of curious.

Carl (#12) – Not giving out grades might actually help with the student evaluations as well. At my institution (a tiny very selective public liberal arts college) we don’t give grades, using written evaluations instead. The same goes for teaching evaluations: we have a universal, mandatory form, but it consists of questions calling for short written answers. The results are often thoughtful and quite useful, and they’re generally read in nuanced ways by colleagues and administrators who become very experienced in reading and writing evaluations. (For what it’s worth, complicated issues with Florida state law might lead us to introduce a numerical component in teaching evaluations soon.)

Dan (#13) – From my experience at one liberal arts college, I’d expect that student evaluation numbers will definitely be noticed, but that your explanation will be taken seriously and your initiative in setting up your courses as seminars will be valued. Recommendation letters that address teaching in an informed way are important, and if such a letter endorses your explanation of the evaluation numbers, that might be quite helpful.

We have to have these course evaluations for “Quality Assurance” in the UK, but they are not explicitly linked to staff evaluations (at my university), just course and programme development. However, we do complain quite a bit among ourselves about whether students *know* whether the teaching is good from one course to the next, since they are asked to do the evaluations before they get their exam results. It all seems suffused over here with rhetoric about the ‘student experience’ rather than based on any kind of pedagogic rigour or sense. The evaluations also tend to highlight issues over which the lecturer has no control, such as computing resources or room allocation.