One Way to Reduce Gender Bias in Performance Reviews

Executive Summary

Love them or hate them, performance evaluations are staples of the modern workplace. But research shows that quantitative performance ratings are far from objective; while they may make the task of comparing workers easier for managers, they are riddled with gender bias. One solution might be simple: change the rating scale you use to evaluate employees. In an experiment that compared a 10-point rating scale to a 6-point one, researchers found that people were equally likely to give high ratings to men and women using the 6-point scale. When presented with a 10-point scale, however, they were much more likely to give 10s only to men. Why? The cultural connotations of the number “10,” and how “excellence” has been coded by gender over time.


Love them or hate them, performance evaluations are staples of the modern workplace. Quantitative ratings have long been touted as impartial tools for measuring worker quality and ensuring fairness in promotion and compensation decisions.

But more recent research shows that quantitative performance ratings are far from objective; while they may make the task of comparing workers easier for managers, they are riddled with gender bias. Research consistently shows that people give men higher performance ratings than women, even when their qualifications and behaviors are identical. Even artificial intelligence algorithms prefer men.

Over time, such biases harm women’s career prospects and contribute to gender gaps in earnings and the underrepresentation of women in top-level positions.

While evidence of gender inequalities in performance evaluations continues to mount, far less is known about remedies. In a new study in the American Sociological Review, we identify one potential way to reduce gender gaps in quantitative performance evaluations that is time- and cost-effective: switch the rating scale.

We studied one school of a large, North American university that — for reasons unrelated to gender — changed its faculty teaching evaluation system from a 1-10 to a 1-6 scale. In total, we looked at 105,034 student ratings of 369 instructors in 235 courses. One unique aspect of our data was that we were able to compare how the exact same instructors teaching the exact same courses fared under the different rating systems.

As expected, under the 10-point system, men received significantly higher ratings than women in the most male-dominated fields. But what we found next was surprising: switching to a 6-point scale entirely eliminated this gender gap.

To figure out why, we conducted an experiment where we gave 400 students identical transcripts of a lecture, which they were told was given by either a male or female instructor — Professor John Anderson or Professor Julie Anderson. We then randomly assigned whether they would rate the instructor on a 10-point or 6-point scale. We also asked students to write down the words that first came to mind when they thought of the instructor’s teaching performance.

As in our field study, we found a large gender gap in ratings under the 10-point system, which again disappeared under the 6-point one. But this time we gained some insight into why the scales mattered. When using the 10-point scale, students readily assigned 10s to John Anderson, but they were reluctant to do so for Julie Anderson, instead giving her 8s and 9s.

The top score on the 6-point scale, in contrast, did not come with such strong performance expectations. To receive a 6/6, it was enough for instructors to be perceived as very good; they didn’t need to be seen as brilliant or extraordinary. As a result, though students using the 6-point scale were still more likely to use superlatives to describe John’s teaching performance, they were just as willing to assign 6/6 marks to Julie as to John. The underlying stereotype of male brilliance was still present, but because a 6/6 rating didn’t evoke the same cultural images of perfection and brilliance as a 10/10, the 6-point scale limited the expression of bias, and the gender gap vanished.

Skeptics might dispute the role of bias in these results. They might argue, for example, that extraordinary professors are more likely to be male, and that the 6-point scale simply leads raters to lump together truly brilliant performance with what is, objectively, merely good. In this view, the 6-point scale doesn’t limit the expression of gender bias; it’s simply a blunt instrument that, unlike the 10-point scale, fails to differentiate between great male teachers and merely good female ones. Our experiment, however, addresses this concern. The only thing that varied between the instructors was whether students believed them to be male or female; their lecture transcripts were identical.

These results have implications far beyond the university setting. Numeric performance ratings are everywhere, and in an era obsessed with data and metrics, we often act as if our tools for measurement and evaluation were neutral instruments. They are not. Even factors as seemingly small as the number of categories on a rating scale can have a significant effect on inequality.

The upside is that we are not powerless when it comes to gender inequality. It’s difficult to overcome our individual biases, but once we recognize that biases are also built into our evaluation systems, we can change those systems. Like the university we studied, organizations of all kinds can experiment with their metrics and evaluation tools and discover new ways to move the needle. We can make progress toward equality one experiment at a time. This is not to say that such interventions will eliminate gender stereotypes, which research shows are deeply ingrained and highly resistant to change. Rather, they can, in sociologist Joan Williams’s words, interrupt their effect on ratings and begin to bridge gender gaps in careers.