My View of the Microsoft Stack Ranking article doing the rounds

The article on Microsoft stack ranking certainly made its rounds around the office. I am no HR expert, and some of the content was not consistent with my understanding and VERY limited experience of stack ranking. So I thought I would go out on a limb and comment in public.

Eichenwald’s conversations reveal that a management system known as “stack ranking”—a program that forces every unit to declare a certain percentage of employees as top performers, good performers, average, and poor—effectively crippled Microsoft’s ability to innovate.

My limited experience of stack ranking is that small units do their own analysis, which is then calibrated against other units (involving lengthy meetings and strenuous discussions) to rank a large population of people (hundreds). So done right, stack ranking does not involve each small unit coming up with final rankings on its own. The bottom of one group can still rank high overall if the whole team is strong. The critical part missing from the article is that different teams are blended as carefully as possible so that people from different teams are ranked relative to one another. This does need to be done carefully. But the article does not completely describe stack ranking, and it does not explain why stack ranking stifles innovation.

“If you were on a team of 10 people, you walked in the first day knowing that, no matter how good everyone was, 2 people were going to get a great review, 7 were going to get mediocre reviews, and 1 was going to get a terrible review,” says a former software developer.

That is not my experience of stack ranking so far. HR knows it is not applicable to populations of 10 people. It is applicable to populations of hundreds of people. When dealing with hundreds, it is more likely that you will have some underperformers and some overperformers. These do not all have to come from the same team.

“It leads to employees focusing on competing with each other rather than competing with other companies.”

I guess it could if not done well, but I have not seen that myself. The people I deal with focus on doing their work well, not on competing with peers. If someone demonstrates they are doing a better job, that is good for the company. The important thing is to make sure people are rewarded when they do good things for the company. In some cases doing good for the company does mean focusing on competing, but I think more often it means focusing on the customer and doing your best for them.

When Eichenwald asks Brian Cody, a former Microsoft engineer, whether a review of him was ever based on the quality of his work, Cody says, “It was always much less about how I could become a better engineer and much more about my need to improve my visibility among other managers.”

Is this suggesting that being visible to managers is a bad thing? Should managers be kept in the dark about what people on their teams are doing? Working with other teams is a part of collaboration. Doing it purely for exposure is not good, but most managers I have met assess the people they meet, and they are pretty good at judging whether someone is a good engineer. So merely being exposed to other managers does not earn a better review – being good is what earns the better review.

According to Eichenwald, Microsoft had a prototype e-reader ready to go in 1998, but when the technology group presented it to Bill Gates he promptly gave it a thumbs-down, saying it wasn’t right for Microsoft. “He didn’t like the user interface, because it didn’t look like Windows,” a programmer involved in the project recalls.

Err, what has that got to do with stack ranking? The same goes for the remainder of the article. I 100% agree that stifling innovation is a bad thing. But for me, the article does a weak job of linking stack ranking to stifled innovation. Rewarding undesirable behavior can stifle an organization, and so can making wrong choices, but stack ranking? I don’t quite see the connection.

Now, while I don’t completely agree with the article, I think there are a lot of good points in it. It is definitely an interesting read, and some of the points do resonate with me. Care certainly must be taken to make sure innovation is not stifled. What convinces me more is that rewarding desirable behavior is beneficial: if you want innovation, reward people who attempt it. Evaluating people is a part of that reward process (to determine who to reward).

But it is not stack ranking that is the problem – it is punishing people who try to innovate that is the problem I see described in the article.


2 comments

“HR knows it is not applicable to populations of 10 people. It is applicable to populations of hundreds of people. When dealing with hundreds, it is more likely that you will have some underperformers and some overperformers. These do not all have to come from the same team.”

Wrong. One is compared against others in the same group, and against those at the same level. A group doesn’t have hundreds of people – maybe 50. And those at the same level as you will number about 10. So you stack rank 10 people.

I don’t know what other organizations do – I only know what I have experienced. So I certainly cannot comment on how it got applied in your organization.

Where I am, small teams get stack ranked (approximately 5 to 10 people). Then the different stack ranks are merged to form a bigger stack rank over a bigger population. Having the smaller groups presorted makes this merge quicker. It is definitely not round robin – there is no guarantee that each smaller team will have someone near the top. The stronger teams have more people near the top than the weaker teams. For my team this is around 70 people. We then take this stack rank and merge it again with other teams, for a total population of ~700.
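To make the merge step concrete, here is a minimal sketch of how presorted team rankings could be combined into one larger stack rank. The team names, people, and scores are all invented for illustration – this is just the general technique of merging already-sorted lists, not a description of any real HR tool.

```python
import heapq

# Hypothetical presorted team rankings: each list is one team's own stack
# rank, strongest first, as (score, name) pairs. All data is made up.
team_a = [(92, "Ann"), (75, "Ben"), (60, "Cal")]
team_b = [(88, "Dee"), (85, "Eli"), (40, "Fay")]
team_c = [(95, "Gus"), (52, "Hal")]

# Because each input is already sorted, heapq.merge combines them in a
# single linear pass instead of re-sorting the whole population.
merged = list(heapq.merge(team_a, team_b, team_c, reverse=True))

print([name for score, name in merged])
```

Note that the result is not round robin: the stronger team_b places two people in the top four, while team_a's weakest person lands near the bottom, matching the point above that strong teams end up with more people near the top.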

We also do band-level checks as a safety check. Are we sure we are not rating the higher-level folks as great at the expense of the lower-level folks? People should be rated according to position, so the band-level ranking helps make sure we have a consistent definition of expectations across the larger organization. We try to get close to the curve per band, but it’s not exact. We don’t want to penalize people in a band just because the band contains a bunch of good people. We don’t have hundreds of people per band (particularly at the more senior bands), so you do need to allow a bit of flexibility for the smaller populations.
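A band-level check like the one above could be sketched as comparing each band's rating distribution against a target curve, with deliberate slack so small or unusually strong bands are not forced onto the curve. The target percentages, rating names, and tolerance below are all invented assumptions for illustration.

```python
from collections import Counter

# Hypothetical target curve: shares of each rating bucket. Invented numbers.
TARGET = {"top": 0.20, "good": 0.70, "poor": 0.10}

def curve_gaps(ratings, tolerance=0.15):
    """Return rating buckets whose share differs from the target curve by
    more than `tolerance`. The generous slack reflects the point above:
    we check consistency without forcing an exact fit on small bands."""
    counts = Counter(ratings)
    total = len(ratings)
    gaps = {}
    for bucket, target_share in TARGET.items():
        share = counts.get(bucket, 0) / total
        if abs(share - target_share) > tolerance:
            gaps[bucket] = round(share, 2)
    return gaps

# A strong senior band: noticeably more "top" ratings than the curve suggests.
senior_band = ["top", "top", "top", "good", "good", "good", "good"]
print(curve_gaps(senior_band))  # flags only the "top" bucket
```

A band that roughly follows the curve returns an empty dict, so only genuinely skewed bands surface for discussion rather than automatic correction.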

I cannot comment on what Microsoft does internally, having never worked there. I can only comment on my personal experience.