As Bersin™’s recent High-Impact Performance research indicates, the discipline of performance management is in the midst of deep and fundamental changes.1 As this evolution has moved from the halls of HR to the C-suite, to the popular press, and to the forefront of many employees’ minds, we’ve noticed that two key terms have come to be used interchangeably along the way: ratings and rankings. This blog post offers a quick perspective on how Bersin uses these terms and why we think it is important to distinguish between the two.
Ratings are a way to evaluate a person’s performance relative to a standard or expectations. Pick a scale—numbers, letters, descriptors, or whatever you like—define it, and assign performance of people (or teams) to the various points on the scale. Voilà! You have a rating system.
In an earlier blog post on Ratingless Rewards, we discussed how few individuals “like” performance ratings, yet the vast majority of organizations use them. And outside the workplace, we are rated and evaluated every day, by various methods and often by our own choosing. Just think of your smart watch prompting you to stand up when its activity monitor detects you’ve been sitting too long, or the feedback your smartphone gives you about the amount of time you’ve spent online.
Our recent High-Impact Performance research confirmed that ratings are not dead—and simply eliminating them is not the answer to improve performance management.2 We’ll dig deeper into ratings and their future in upcoming research.
Take a rating scale as described above and overlay the decision that only a certain percentage of a given population should or can receive each rating. For example, consider a three-point scale. It might dictate that 20 percent of the population should be rated High, 70 percent of the population rated Middle, and the remaining 10 percent rated Low. There’s your rating distribution.
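As a rough sketch, the three-point, 20/70/10 distribution above could be applied to a team that has already been ordered from strongest to weakest performer. The function name, team size, and labels here are all illustrative, not part of any Bersin methodology:

```python
# Illustrative sketch: applying a 20/70/10 rating distribution to an
# already-ordered group. Cutoffs are fractions of the population:
# the first 20% are High, the next 70% Middle, the final 10% Low.

def apply_distribution(ordered_people, cutoffs=(0.20, 0.90)):
    """Assign High/Middle/Low by position in the ordered list."""
    n = len(ordered_people)
    ratings = {}
    for i, person in enumerate(ordered_people):
        fraction = (i + 1) / n   # this person's position as a fraction
        if fraction <= cutoffs[0]:
            ratings[person] = "High"
        elif fraction <= cutoffs[1]:
            ratings[person] = "Middle"
        else:
            ratings[person] = "Low"
    return ratings

team = [f"Employee {i}" for i in range(1, 11)]  # 10 people, best first
print(apply_distribution(team))  # 2 High, 7 Middle, 1 Low
```

Note that applying a distribution this way already presupposes an ordering of people, which hints at how closely distributions and rankings are related in practice.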
Unlike ratings, distributions are not something we encounter in our daily lives. Walking 2,000 steps daily does not make us healthy just because everyone else walked only 1,000 steps. Distributions also tend to create competition rather than improve performance or productivity.
A key reason why organizations establish rating distributions is to enable differentiation of pay based on performance. Budgets are a zero-sum game, so there can’t be too many high performers, or the rewards math doesn’t work.
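The zero-sum arithmetic can be made concrete with hypothetical numbers. The flat salary, 3 percent budget, and per-rating raise percentages below are all assumptions for illustration, not figures from the research:

```python
# Hypothetical worked example of why a fixed merit budget caps the
# number of high ratings. All figures are illustrative.

salary = 60_000      # assume a flat salary for simplicity
headcount = 100
budget = salary * headcount * 3 // 100   # a 3% merit budget: 180,000

# Illustrative raise tied to each rating, in whole percent
raise_pct = {"High": 4, "Middle": 3, "Low": 1}

def cost(counts):
    """Total raise spend for a given mix of ratings."""
    return sum(salary * raise_pct[rating] * n // 100
               for rating, n in counts.items())

# A 20/70/10 mix spends the budget exactly:
print(cost({"High": 20, "Middle": 70, "Low": 10}))  # 180000

# Rate half the population High and the same budget is overspent:
print(cost({"High": 50, "Middle": 40, "Low": 10}))  # 198000
```

With these assumed raise levels, any mix that exceeds 20 percent High ratings blows through the budget, which is exactly the pressure that leads organizations to impose a distribution.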
Organizations take varying approaches to enforcing adherence to a rating distribution:
- A distribution guideline merely suggests that ratings follow a certain distribution.
- A recommended distribution allows deviation from the distribution on a case-by-case basis.
- A forced distribution strictly forces ratings into adherence with the distribution.
Many organizations use some version of the bell curve as their rating distribution model—even though that model may encourage mediocrity.3 As discussed in one of our recent blog posts about Pay for Performance, a power law distribution is more representative of actual contributions.
Rankings (also known as “Forced Ranking,” “Stacked Ranking,” “Rank and Yank,” etc.) involve comparing people to each other in lieu of—or in addition to—comparing them to a standard. In a ranking system, there are winners and losers—but no ties. Take a group of employees and rank the highest performer as number one, the next best as number two, and so on for the entire population being evaluated. In some cases, this can be a whole company, depending on the performance management approach used. As such, many people feel that, regardless of whether the final rankings are ever shared with employees, their mere existence encourages a culture of competition—sometimes to the detriment of teamwork, collaboration, and the greater good of the organization.
Some “ratingless” organizations have adopted a “behind-the-scenes” ranking approach to help determine pay. We describe the benefits and challenges of this approach in more detail in our Ratingless Rewards research.4
Ratings and rankings also interact: with a ranking system in place, ratings can easily be derived using a formula. For example, a group of 100 employees, ranked 1 to 100, could be assigned ratings using the three-point rating scale and distribution discussed above. In this situation, employees ranked 1 to 20 are High, 21 to 90 are Middle, and 91 to 100 are Low. The converse, attempting to derive rankings from ratings, is of course not possible, because each rating band collapses many distinct ranks into a single value.
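The rank-to-rating formula just described can be sketched in a few lines; the thresholds (1–20 High, 21–90 Middle, 91–100 Low) come straight from the example above, while the function name is ours:

```python
# Sketch of deriving ratings from ranks in a 100-person population,
# using the three-point scale and 20/70/10 distribution from the text.

def rating_from_rank(rank):
    """Map a rank (1 = best) in a 100-person group to a rating."""
    if rank <= 20:
        return "High"
    if rank <= 90:
        return "Middle"
    return "Low"

ratings = [rating_from_rank(r) for r in range(1, 101)]
print(ratings.count("High"), ratings.count("Middle"), ratings.count("Low"))

# The mapping is many-to-one: ranks 21 and 90 both become "Middle",
# which is why the original ordering cannot be recovered from ratings.
print(rating_from_rank(21) == rating_from_rank(90))
```

The many-to-one mapping is the whole point: going from ranks to ratings discards information, and that lost information is precisely what would be needed to go the other way.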
We’ve seen some confusion of these terms in recent business media, so we wanted to clarify their definitions and touch on the merits and challenges of each approach. Please keep an eye out for future Bersin research on the topics of performance management and rewards. If you have an interesting story about how ratings, rankings, or both are used at your organization, we would love to hear from you. Please contact Kathi Enderes (firstname.lastname@example.org) or Pete DeBellis (email@example.com).
Bersin members can access the High-Impact Performance Management series and The Ratingless Rewards Dilemma: How to Determine Pay without Performance Ratings in the Bersin Library. Not a Bersin member but want to know more? Visit the Bersin website.
1 Seven Top Findings for Enabling Performance in the Flow of Work, Bersin, Deloitte Consulting LLP / Kathi Enderes, PhD, and Matt Deruntz, 2018.
3 The Myth of The Bell Curve: Look for The Hyper-Performers, Forbes.com / Josh Bersin, February 19, 2014.
4 The Ratingless Rewards Dilemma: How to Determine Pay without Performance Ratings, Bersin, Deloitte Consulting LLP / Kathi Enderes and Pete DeBellis, 2018.