Imagine an employee comes to you and says they’re not sure why a peer received a larger raise this year when, in their view, the peer’s performance was underwhelming compared to others’.
You immediately want to react and defend your position, but you find yourself questioning how you arrived at different conclusions about these individuals’ performance. Where do you start?
In this article, we explore our unconscious tendencies toward performance bias, as well as how to overcome these biases so that the division of responsibility, performance appraisals, and reward allocation in an organization can be fair.
There is a wealth of research about human judgment and how swayed it can be by unconscious bias.
Whether the research focuses on romantic partners, authority figures, or other groups in our lives, one conclusion is clear: every human’s judgment is subjective, ever-changing, and fallible. While experts in certain fields may have refined their judgment in ways that avoid many common pitfalls, “universal truths” that all human beings agree on are few and far between.
Judgment about employee performance is no exception. Further, it’s an area that can yield legal consequences if not handled in accordance with labor rules and regulations. So, how do you mitigate common pitfalls and the risk of bias in assessing individual performance?
What is Performance Bias?
Let’s start by understanding what performance bias is. As alluded to above, performance bias is the result of human judgment being subjective, ever-changing, and fallible. Specifically, it is the bias that arises when we judge employee performance; for example, during a review process.
We all have a tendency to feel more comfortable with and like entities that we’ve:
- Been exposed to, and
- Had a good experience with in the past.
On the flip side, our human nature compels us to be more skeptical of entities we are less familiar with or have had a bad experience with in the past.
Learning these patterns over time results in shortcuts, or biases, which are essentially automatic associations made by our brains so we don’t have to manually process all of the information coming at us at once.
These shortcuts, heuristics, or biases can concern anything: rain after 1 p.m. on a Wednesday, someone wearing a blue shirt, people who are exceptionally tall, and so on. They can tell us that we don’t have to fully evaluate a person or situation because that cue has “always” been associated with a positive consequence or, alternatively, a negative one. This is, of course, a broad and simplified view of how the brain forms associations and judgments, offered here to provide some context and background.
Bias comes up in all facets of life and work, such as in who we choose to work with (see hiring bias) and in how we perceive the value of their work.
Applied to performance evaluation, these shortcuts our brain creates can skew our perception of an employee’s contribution.
Performance is something that is typically evaluated holistically (in an annual performance review) and in the moment (when an employee excelled in a project or even answered an email really well). There are a variety of shortcuts or biases that come into play with multilayered judgments like these.
Types of Performance Bias
You may be reflecting back on a psychology course you took at this point, trying to remember some of the common biases covered in the material. Luckily, psychological research has advanced and even helped consolidate the biases humans are subject to into six originating beliefs:
- My experience is a reasonable reference.
- I make correct assessments of the world.
- I am good.
- My group is a reasonable reference.
- My group (members) is (are) good.
- People’s attributes (not context) shape outcomes.
Based on the six fundamental beliefs above, it’s easy to start to see connections between some of these beliefs and performance ratings. Let’s take a closer look at these fundamental types of bias and how they specifically manifest as performance bias.
My Experience Is a Reasonable Reference
This can be a form of confirmation bias, in which the perception of an employee’s performance is based on the reviewer’s pre-existing opinion of them.
How This Belief Could Yield a Biased Performance Rating
The manager fails to consider others’ opinions of the employee’s performance, instead basing their evaluation mainly on what they can observe directly.
A halo or horns effect may be at play where, if a reviewer has strong opinions about an employee, it becomes difficult for them to see performance data and input from others objectively.
I Make Correct Assessments of the World
The manager believes that while others are biased, they are not.
How This Belief Could Yield a Biased Performance Rating
The manager is overly entrenched in their opinion about the individual’s performance and may treat one or a few situations as disproportionately reflective of the individual’s overall performance.
This confirmation bias may also be directed at other reviewers. The perception is that if another manager agrees with their assessment of an employee, that manager is a good reviewer; if they disagree, their input is not seen as valid.
When contradicting input gets discounted, we also get selection bias: the result of cherry-picking data (and exclusions) to support a desired outcome. Again, these processes operate almost entirely subconsciously, so we are not assuming malice or bad intent on anyone’s part.
I Am Good
The manager believes they are better at evaluating performance than others and is not receptive to suggestions or feedback about how to evaluate performance more accurately.
How This Belief Could Yield a Biased Performance Rating
In much the same way as discussed above, when a manager believes their assessment carries more weight than those of other assessors, the overall performance appraisal ends up skewed.
Potential biases like these become especially troublesome when performance is assessed through a qualitative review of the employee’s work. The manager may be influenced by factors that aren’t directly related to performance. For example, they may praise an employee’s friendly manner and dismiss accounts of low productivity.
My Group Is Good
Biases favoring one’s own group manifest as gender bias, racial bias, or any other preference for a specific peer group.
How This Belief Could Yield a Biased Performance Rating
The manager is more likely to give higher performance ratings to people who look, act, sound, etc. like them. The more the individual comes across as similar to the manager, the greater the likelihood of a higher performance rating, especially in the absence of intentional mitigation of this bias.
People’s Attributes Shape Outcomes
Assessors who hold this belief place absolute accountability for an outcome on one person; the outcome is therefore judged as a success or failure in isolation.
How This Belief Could Yield a Biased Performance Rating
The manager assumes that instances of high or low performance are due to individual characteristics and not the result of the situation, type of project, feasibility of the deadline, or any other contextual factors.
For example, in a scenario where an employee fails to meet their key performance indicators (KPIs), this bias may lead the manager to overlook significant dedication. Perhaps, under the excessively challenging circumstances they worked in, the person’s performance was a great feat. Managers must take care to discern which patterns of behavior on the employee’s part stem from traits and characteristics versus the situation.
Preventing and Mitigating Performance Bias
There are several things you can do to mitigate the biases above. Bringing these biases into awareness is an important first step.
Beyond awareness, alignment on performance expectations and measurement should happen early and often. Let’s break this statement down into two pieces.
What Are You Rating?
If you told me your organization was struggling with bias in performance ratings, one of my first questions would be, “What are you rating?”
Put another way, what are your performance expectations and how are you measuring them?
Performance in some roles may be easy to quantify. However, the devil is always in the details. For example, your firm may set a goal for individuals to be billable 75% of the time or to achieve client satisfaction ratings of a certain caliber. It doesn’t take much mental exercise to see how individuals could manipulate outcomes like these, such as by giving clients a “friendly nudge” to provide high ratings or finding ways to extend billable time that others may not agree with.
Let’s assume that part of the reason you’re reading this article is that performance expectations are not easy to quantify in your line of work. What do you do then?
A Process for Setting Performance Expectations:
Step 1: Start with the desired outcomes of the role or position.
Why did you hire for the role? What are the most important things someone in that role could achieve for your firm? What KPIs or objectives and key results (OKRs) would define success here?
Step 2: Evaluate what control the individual in this role has over those outcomes.
For example, if an individual is tasked with helping to improve the profitability of the unit, do they have the authority to limit costs, raise prices, or take other actions in this direction? This may take some time and brainstorming, but it’s important to arrive at outcomes the individual can influence.
It is also necessary to set evaluation criteria that truly distinguish between a high performer and a low performer in the role.
Step 3: Determine how you can be crystal clear about the expectation and measurement.
Is a KPI, by your definition, a minimum requirement for success or a definition of excellence?
For example, if the individual has a new business goal of $250,000, is “meeting expectations” achieved when the individual hits $250,000-$275,000, or would this be an overachievement? Are they falling short of expectations when they’re in the $225,000-$249,999 range?
Failing to think through the likely outcomes and how each one will be perceived from a performance perspective can result in big disconnects in the review conversation.
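To make that banding concrete, here is a minimal sketch (in Python, using hypothetical dollar thresholds and band labels drawn from the example above) of how an agreed-upon goal can be translated into explicit rating bands, so that the manager and the employee read the same number and land on the same label:

```python
# Hypothetical rating bands for a $250,000 new business goal.
# The thresholds and labels are illustrative only; replace them with the
# definitions you and the employee agreed on at the start of the period.
RATING_BANDS = [
    (275_000, "Exceeds expectations"),      # above the agreed stretch point
    (250_000, "Meets expectations"),        # hit the goal
    (225_000, "Approaching expectations"),  # close, but short of the goal
    (0,       "Below expectations"),
]

def rate_new_business(actual_dollars: float) -> str:
    """Return the rating band for a new business result."""
    for floor, label in RATING_BANDS:       # bands are ordered high to low
        if actual_dollars >= floor:
            return label
    return "Below expectations"

if __name__ == "__main__":
    for result in (310_000, 260_000, 240_000, 180_000):
        print(f"${result:,}: {rate_new_business(result)}")
```

However you choose to define the cutoffs, the value is that the mapping from result to rating is written down before review time, which closes one opening for after-the-fact, biased interpretation.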
Step 4: Agree on the measurement.
Measuring performance can require a significant amount of effort depending on the expectations set. Be realistic about the effort you can invest to measure and track adherence to your definition of success.
Recording performance can also require oversight and/or documentation that can feel invasive or arduous at times. For example, if you choose to use productivity tracking, employees may perceive the constant monitoring as a sign of mistrust. It’s important to be clear at the outset on what you’re measuring, how, and why, plus the relationship between these metrics and your definitions of good or bad performance. Doing so helps ensure that ongoing efforts to collect data or information are taken in the context in which they’re intended.
How Often Are You Revisiting Expectations?
As mentioned above, alignment on performance expectations and measurement should happen early and often.
We’ve already discussed some strategies for aligning on expectations and measurement. But what good are these efforts if the metrics are then cast aside, not to be looked at until review time in 12 months or so?
Setting performance expectations and then failing to revisit them is like withholding an athlete’s stats until the end of the season. Adjustments can no longer be made, and people are just guessing as to how they may be doing in the interim. Coaching may happen along the way, but coaching toward what outcomes?
Depending on how quickly things move in your organization, it may be beneficial to revisit the goals as often as monthly or, at the very least, semiannually.
Many organizations are likely to find that a quarterly discussion proves beneficial, with perhaps a looser check-in monthly or semi-quarterly. All of these conversations should use the performance expectations as their backdrop and structure, and at least a couple of times throughout the review period, the employee and manager should provide documented feedback on how things are progressing.
While it may be tempting to put off performance conversations until organizational mandates kick in, “shoveling the pile while it’s small” often ends up being the course of action most appreciated by all parties involved: managers, team members, HR colleagues, and so on.
What if Performance Bias Still Comes Into Play?
We’ve discussed a lot of concepts that reside at the individual level, as opposed to the team or organizational level. For example, we’ve discussed individual human judgment and setting performance expectations for individual roles. But individuals interacting in a group produce far more complex dynamics than individuals in isolation. Put simply, the whole is greater than the sum of its parts. What does that mean for performance evaluations?
While there should be some level of individualization in the manager’s approach and judgment, the flip side is looking across the whole and evaluating how the components work together.
Specifically, performance evaluations should be analyzed and calibrated in relation to the managers who conducted them, as well as to roles, teams, and the organization as a whole.
They should be considered in the context of diversity factors and individual differences, demographic and otherwise. A people analytics program and/or tools built into your performance management system can be helpful here, but what if you don’t have either?
A good place to start is to lay out the performance evaluations by manager and by team. As a manager, you can do this yourself within the group of individuals you manage. For example, look at your evaluations side by side. Did you write a similar amount of commentary for each? Did you rate the individuals similarly in relation to the expectations that were set? Do the constructive and commending comments have a similar style across the reviews?
When you stack-rank the individuals in your group by the overall performance rating they received, would you say the ranking corresponds to how you and others experience each individual’s work?
This ad hoc approach can be helpful, though you can quickly see where quantitative and text analysis could help home in on areas in need of attention with more ease.
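As a rough illustration of what that quantitative pass might look like, here is a minimal Python sketch. It assumes a simple in-memory list of review records with hypothetical field names; in practice, the data would come from your performance management system or an exported spreadsheet:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical review records; the managers, employees, ratings, and
# comments below are placeholders for your own exported review data.
reviews = [
    {"manager": "A. Rivera", "employee": "E1", "rating": 4,
     "comments": "Consistently exceeded project goals and supported peers."},
    {"manager": "A. Rivera", "employee": "E2", "rating": 3,
     "comments": "Met expectations on most deliverables this cycle."},
    {"manager": "B. Chen", "employee": "E3", "rating": 5, "comments": "Great."},
    {"manager": "B. Chen", "employee": "E4", "rating": 5, "comments": "Great year."},
]

# Group the reviews by the manager who wrote them.
by_manager = defaultdict(list)
for review in reviews:
    by_manager[review["manager"]].append(review)

# Compare average rating and average comment length per manager; a manager
# whose numbers stand out may be rating leniently or harshly, or documenting
# unevenly, and is a candidate for a calibration conversation.
for manager, items in by_manager.items():
    avg_rating = mean(r["rating"] for r in items)
    avg_comment_words = mean(len(r["comments"].split()) for r in items)
    print(f"{manager}: avg rating {avg_rating:.2f}, "
          f"avg comment length {avg_comment_words:.0f} words "
          f"across {len(items)} reviews")
```

Outliers in average rating or comment length aren’t proof of bias, but they show where a closer human look, and a calibration conversation, is warranted.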
Through all of this work, though, it’s important not to “lose the forest for the trees” and to remember that the whole point is to ensure that disparities in performance ratings reflect only true disparities in performance.
A Final Thought on Performance Bias
Performance evaluation is a complex process that is important to handle with care. It affects individuals’ perceptions of justice and fairness in their organizations. To the extent that performance evaluations influence compensation decisions, assessment of a person’s performance may also impact their livelihood and experience of equity.
Handled effectively, performance evaluations can be an important tool in the ongoing conversation about how individuals are doing and where they can contribute in the future. Left unchecked, our human judgments about performance are subject to the pitfalls and challenges described above. As organizations grow, exploring ways to evaluate the effectiveness of their current performance measurement systems and improve upon them will be paramount both for retaining top talent and for protecting against unintended legal consequences.