Faculty Pay 'by Applause Meter'

It's not like professors to think that they are so well compensated that it's not worth hoping for a $10,000 bonus. But out of more than 2,000 faculty members at Texas A&M University's main campus, only about 300 have agreed to vie for a bonus being offered for their teaching -- and all they would need to do is have a survey distributed to their students.

The reason for passing on a chance at $10,000 is that many professors are frustrated by the way the money is being distributed: based solely on student evaluations. Numerous studies have questioned the reliability of student evaluations in measuring actual learning; several of these have noted the tendency of many students to reward professors who give them higher grades. Further complicating the debate is a sense some have that the university is endorsing a consumerist approach to higher education. The chancellor of the A&M system, Michael D. McKinney, told the Bryan-College Station Eagle: "This is customer satisfaction.... It has to do with students having the opportunity to recognize good teachers and reward them with some money."

That comment didn't go over well with many professors who believe that their job responsibilities include -- at least sometimes -- tough grading, challenging student ideas, or generally putting learning before student happiness.

"That customer idea really, really bothers me," said Clint Magill, a genetics professor who is speaker of the Faculty Senate at College Station. "You can't buy the grade or the degree so how can we be the same as a consumer thing? It's like saying 'If you give us professors this much, you get your grade.' If we have any principles at all, it doesn't work that way."

University administrators have defended the plan, pointing to research that they say shows student evaluations can be reliable. But the researcher who did the studies Texas A&M is citing said in an interview Monday that he never endorsed evaluations of the sort A&M is using or the way they are being used -- and that this all runs counter to his key findings.

Origins of the Idea

Many colleges and universities use student evaluations of teaching as part of faculty reviews, and students flock to Web sites like RateMyProfessors.com to see what other students say about instructors. But RateMyProfessors is at least theoretically not part of formal reviews, and official student evaluations tend to be used as just one part of a review of teaching. Programs like the one at Texas A&M are rare -- although the University of Oklahoma is doing a pilot project in its engineering and business schools with a similar bonus offer.

The idea of offering the bonuses has been talked up in Texas by the Texas Public Policy Foundation, a conservative-leaning think tank with ties to Gov. Rick Perry, a Republican. A&M officials started talking about the bonus idea after all of the university system's regents -- appointees of Governor Perry -- attended a seminar, organized by the foundation, during which the bonus idea was promoted.

An essay by the foundation's president, Brooke Leslie Rollins, argues that faculty members are too focused on research and need incentives to pay attention to students. In endorsing the idea of relying on student evaluations, Rollins writes that "research shows that students are excellent judges of learning, especially when deliverables for a course are clearly stated" and adds that the "current structure at Texas universities gives teachers very little incentive to strive toward excellence."

Then McKinney, the chancellor, sent a letter to all faculty members this fall telling them that the system was starting the program at three campuses -- College Station, Prairie View A&M and Texas A&M-Kingsville. McKinney asked faculty leaders to join with presidents in devising the form students would be given to evaluate the professors -- but many professors immediately balked, and the Faculty Senate at College Station specifically advised non-cooperation with the effort.

As time has passed, the university has taken steps that have improved the bonus plan -- even according to its critics. For instance, the university has decided not to have all faculty members who volunteer to seek the bonus compete against one another, but will instead have professors compete within their individual colleges. This change responded to criticism noting that many science courses tend to have both lower grades and lower student evaluations. In addition, the university announced that these student evaluations would not be used for tenure or promotion purposes.

The questions being used in the reviews, to which students respond on a five-point scale, include the following: "My instructor seemed to be knowledgeable about the subject matter." "My instructor created a classroom atmosphere that was productive/conducive to learning." "My instructor was enthusiastic about the subject matter of the course." "I would take another course with this instructor, if possible, or recommend this instructor to other students." "I recommend this instructor for a teaching excellence award."

Magill, the Faculty Senate speaker, said serious problems remain with the system. "Any evaluation of teaching that doesn't include some measure of learning has some real problems," he said. Magill said that there is nothing wrong with using student evaluations, but that they need to be examined not just for scores, but for context based on the course, the students, their achievement levels, and their success at mastering key skills. These concerns aren't just theoretical. A major study by Ohio State University in 2007 -- in which student reviews were linked to actual learning by examining students' grades in subsequent courses based on the course they reviewed -- found absolutely no correlation between student evaluations and actual learning.

What the Ohio State researchers did find -- as many other studies have found -- was clear correlation between the grades the students receive and those they give their professors. And that's another worry for Magill. "My biggest concern is the people who would be the worst teachers might think 'I can really raise my scores by improving grades.' They won't, but they'll mess up the grading system," Magill said.

What the Research Says

Frank B. Ashley III, vice chancellor for academic affairs at the A&M system, said that faculty concerns are understandable because "there is suspicion about anything that comes from the system office." But he said that the idea is to honor good teachers, and that professors will come to see that.

Ashley strongly disputed the idea -- widely held by researchers -- that student evaluations are not reliable and encourage grade inflation. He characterized the debate as unsettled. "You'll find studies that say it's true and studies that say it's not true," he said. Asked for a study that shows that student evaluations are reliable and don't encourage grade inflation, he said that the article he used in working on the policy was "Student Rating Myths vs. Research Facts," and was published in 1999 in the Journal of Personnel Evaluation in Education.

The author, Lawrence M. Aleamoni, is now retired as a professor of education at the University of Arizona. Reached Monday, he said that he did in fact show in his article that some student evaluations can be reliable. But he said that several parts of the Texas A&M policy run counter to his findings and recommendations.

For example, Aleamoni said that the only times he has found student evaluations to be reliable are when they are nationally devised and normed, not when they are "home grown," as A&M's questionnaire is. Further, Aleamoni said his research found that students may answer very specific questions about their professors reliably, but that broad questions -- such as "Does this professor deserve a teaching award?" -- are the sort students tend to answer based on the grades they receive.

But Aleamoni said that even if his research suggests that some student evaluations -- designed in ways that differ from the Texas A&M approach -- can be reliable, he has always stressed that these evaluations should never be the sole basis for a decision about the quality of someone's teaching. "Students are only in a position to judge performance in the classroom," Aleamoni said.

Any real evaluation of teaching, he said, must include peer analysis of such issues as, "How well was the course designed? Are the materials current and up to date? Have they set up the right kinds of standards for the students?" And students aren't in a position to judge these things, he added.

Another Approach

Cary Nelson, president of the American Association of University Professors, said that the A&M system sounded like the idea of paying professors "by applause meter," which he said was offensive. "This corrupts peer evaluation, diminishes the faculty role, and encourages grade inflation," he said. "You give them A's and you get 10 grand."

At the same time, Nelson said that the idea of rewarding long-term commitment to teaching was something he applauded. He said it makes much more sense to design rewards that look at the long term and that feature a variety of measures, not just student reviews. The University of Illinois at Urbana-Champaign, where he teaches, has awards for teaching in which student evaluations are considered, but only over the long term, and only with the recommendation of a department chair -- who would not put forth a nominee known for giving everyone A's, nor hold back from nominating a tough grader. Further, the award requires evidence such as work as a mentor, development of new courses, and a leadership role on curricular matters.

There are different awards for different teaching levels, but the prize -- like A&M's -- is significant. Winners of the graduate teaching prize receive $5,000 in cash immediately and a recurring $3,000 boost in their salaries. (Nelson noted for full disclosure's sake that while he has not won this prize, his wife has.)

A process like the one used at Illinois means that long-term effectiveness is rewarded, Nelson said, and the process "lends authority to the award." The process at A&M, he said, "sounds like public relations."