The Accuracy of Raters in 360-Degree Feedback (Or Not)

In most 360-degree feedback interventions, it is common to compare one's self-ratings to those of other raters such as the manager, direct reports, colleagues, and external customers. Which raters should provide feedback to participants will depend on a number of factors, including:

• Purpose of the 360-degree feedback process

• Job level of the participant

• Competencies being assessed

• Which relevant stakeholders have had an opportunity to observe the participant and provide constructive feedback

How many raters are necessary to provide meaningful and accurate 360-degree feedback?

The answer, of course, is only one rater — but we just don't know who is "all knowing" and perfectly accurate in their observations. So, ideally, we should gather a "sampling" of data from those around the participant who, together, can provide a complete view of the participant's strengths and potential development areas.

Ken Nowack, Ph.D. often answers the question of how many raters are needed in 360-degree feedback with the analogy of assembling a puzzle with a child: "How many puzzle pieces do you need to assemble before you are confident that what you are making resembles the picture on the box cover?" His answer is that the more pieces we assemble correctly, the more confident we become that we are truly seeing the image on the cover — but we don't need to assemble 100% of the pieces to verify this.

Indeed, all we need is a “critical mass” of puzzle pieces to be assembled to trust we are seeing the true “picture” of the puzzle. Like making puzzles, when we ask a large group of raters for feedback we begin to see our behavior with more confidence and clarity. In fact, there is some research that suggests what this “critical mass” of feedback is in order to reach a level of confidence that others are accurately experiencing our behavior and can identify signature strengths and development opportunities.

Research by Greguras and Robie (1998) suggests that most 360-degree feedback projects would require at least 4 supervisors, 8 peers, and 9 direct reports to achieve acceptable levels of reliability (.70 or higher). Of course, this statistical standard may not be practical in many circumstances, such as when leaders have only a few direct reports or have worked for a single manager over many years.
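The intuition behind these rater counts can be sketched with the Spearman-Brown prophecy formula, which projects the reliability of an average of k raters from an assumed single-rater reliability. The single-rater values below are illustrative assumptions chosen to reproduce the article's 4/8/9 figures, not numbers taken from the study itself (which used generalizability theory rather than this simple formula):

```python
def spearman_brown(single_rater_r, k):
    """Projected reliability of the average of k raters,
    given an assumed single-rater reliability (Spearman-Brown prophecy)."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Illustrative single-rater reliabilities (assumed, not from the study):
sources = [("supervisors", 0.37), ("peers", 0.23), ("direct reports", 0.21)]

for source, r1 in sources:
    k = 1
    while spearman_brown(r1, k) < 0.70:  # grow panel until reliability >= .70
        k += 1
    print(f"{source}: ~{k} raters for reliability >= .70 (assumed r1 = {r1})")
```

Under these assumed values, the loop lands on roughly 4 supervisors, 8 peers, and 9 direct reports — illustrating why rater sources with noisier individual ratings need larger panels.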

Their findings do suggest that inviting more, rather than fewer, raters helps ensure accuracy and a large enough rater pool to make the 360-degree feedback findings relevant and useful. Having too few raters in any rater category may limit the meaningfulness and accuracy of the feedback for professional and personal development.

Coach’s Critique:

There are a couple of things to keep in mind when selecting raters for the 360-degree feedback process. First, how many raters should be invited? And second, who qualifies as a rater?

In my practice, my clients often feel that feedback from a 360-degree feedback process is inaccurate and subjective, and I generally agree with them. The idea of being evaluated based on people's perceptions can seem somewhat illegitimate. Every person might have a different opinion — so whose opinion is valid? Some people have negative perceptions of everyone they come across. This is exactly why choosing MORE raters is so important. The more people who contribute their views, the more likely the feedback is to hold some level of truth. One person's experience of an individual may be completely different from another person's. However, if four individuals report a similar experience, isn't there some truth to it?

At the same time, selecting raters who have had little contact with the feedback recipient carries its own risks. It's important to ensure that the raters have known the participant for some period of time and understand the nature of the participant's work. For instance, recently hired team members might not be suitable candidates, as they might not have had enough time to observe the participant's behavior on multiple levels.

Therefore, it's important for coaches to help clients choose the appropriate raters. Strategizing in this way can make or break the success and effectiveness of the 360-degree feedback process.


Dr. Sandra Mashihi is a senior consultant with Envisia Learning, Inc. She has extensive experience in sales training, behavioral assessments and executive coaching. Prior to working at Envisia Learning, Inc., she was an internal Organizational Development Consultant at Marcus & Millichap, where she was responsible for initiatives within training & development and recruiting. Sandra received her Bachelor of Science in Psychology from the University of California, Los Angeles, and her Master of Science and Doctorate in Organizational Psychology from the California School of Professional Psychology.


Thanks for the post. I agree that having the correct number of raters is important for 360-degree feedback to work — we normally advocate inviting between 4 and 8 people per population group. Another very important consideration when selecting raters is to make sure that those chosen will be open, honest, and constructive, and that their feedback will be valued by the participant.

About Envisia Learning

Envisia Learning has been helping leaders, consultants and coaches deliver real and lasting behavior change in organizations for over 25 years. The company's 360-degree feedback assessments and online goal-setting tools merge its expertise in psychology, technology and coaching to offer a complete behavior change system. We invite you to look around our website or contact us to learn more.