Bias is everywhere. In Deloitte’s 2019 report on the state of inclusion, nearly two-thirds of respondents reported experiencing bias in the workplace in the past year. The sobering statistics continue from there: respondents reported that bias had negative impacts on their productivity (68 percent), engagement (70 percent), and happiness, confidence, and wellbeing (84 percent).1
As humans, we hold a variety of unconscious biases. Many are useful shortcuts for daily life, almost intuitive; others are unproductive holdovers from the past that are no longer relevant. We tend to favor people most like ourselves (similarity bias). We often prefer information that confirms our beliefs and discount information that contradicts them (confirmation bias). We can also put greater weight on things that have just happened (recency bias). These and other types of biases can unconsciously influence our decision-making: we may inadvertently hire or promote those most like us, make talent selections that align with our preconceived notions, and base our performance evaluations on what we expect to see or have seen most recently.
Organizations are increasingly recognizing that humans are biologically hardwired to operate on instinct and habit and are seeking nonhuman solutions to mitigate outmoded and problematic biases. For instance, the use of artificial intelligence (AI) in recruitment alone is expected to increase threefold over the next two years.2
Using AI to Help Reduce Bias across HR
AI is not new, but it has been making notable strides into talent acquisition, internal mobility, learning and development, and performance management. Common use cases of AI include:
- Revising job postings to use gender-neutral language
- Anonymizing resume information (e.g., names, photos, gender, schools, ZIP codes, graduation dates) to reduce reviewer bias
- Using gamification to assess abilities beyond resume text and match applicants to their best-suited roles
- Providing real-time performance metrics to nudge more frequent feedback, transparency, and learning recommendations
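The resume-anonymization use case above can be sketched in a few lines. The field names and sample record below are hypothetical, not drawn from any specific tool; a production system would also need to handle free-text resumes, not just structured fields:

```python
# Minimal sketch of resume anonymization: remove fields commonly
# associated with reviewer bias before a human sees the resume.
# Field names and sample data are hypothetical.

IDENTIFYING_FIELDS = {"name", "photo", "gender", "school", "zip_code", "graduation_date"}

def anonymize_resume(resume):
    """Return a copy of the resume with identifying fields removed."""
    return {k: v for k, v in resume.items() if k not in IDENTIFYING_FIELDS}

resume = {
    "name": "Jane Doe",
    "school": "State University",
    "zip_code": "12345",
    "skills": ["Python", "SQL"],
    "years_experience": 7,
}

masked = anonymize_resume(resume)
print(masked)  # only the job-related fields remain
```

The reviewer then sees only job-related signals (skills, experience), which is the "nudge" toward the most critical aspects described above.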
However, AI is not without its own challenges. The algorithms that drive AI (including the parameters for machine learning applications) are created by humans, and humans have unconscious biases. Until we reach the technology singularity, at which point AI will program itself (we’ll save that prediction for a future year), AI will remain subject to the biases of its creators.
For example, if your company is currently made up of mostly Caucasian males over 40 years of age, and the talent acquisition AI tool is establishing correlations using only this data set to bring in more high performers, then it should be no surprise that the result will be more Caucasian males over 40. Clearly, a more thoughtful approach to “programming” the AI is required in order to identify and bring on a more diverse talent pool.
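The dynamic in this example can be illustrated with a deliberately naive "model" that simply learns the most common profile among past high performers. The data and labels here are hypothetical, and real talent-acquisition AI is far more sophisticated, but the underlying failure mode is the same: a skewed training set yields a skewed result.

```python
# Hypothetical sketch of "bias in, bias out": if every historical
# high performer shares one demographic profile, a model that learns
# correlations from that data will simply reproduce the profile.

from collections import Counter

# Historical data: a homogeneous set of past "high performers."
past_high_performers = [{"demographic": "white male over 40"} for _ in range(100)]

profile_counts = Counter(p["demographic"] for p in past_high_performers)
learned_ideal_profile = profile_counts.most_common(1)[0][0]

print(learned_ideal_profile)  # -> white male over 40 (bias in, bias out)
```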
Many organizations are aware of AI’s flaws and are taking steps to address them. For example, several leading technology companies have announced their use of open-source software tools that can be used to examine bias and fairness in AI models.3 Furthermore, a growing number of AI auditing firms is emerging to help address these issues.
Combining AI with Behavioral Science
AI can provide humans with powerful tools to reduce unconscious bias, but in turn, humans need to design AI with fairness standards in mind and routinely monitor and test algorithms to ensure they do not favor or disadvantage any particular group. In this way we can use human judgment, aided by AI, to reduce both our unconscious biases and inadvertent machine-learning biases.
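One concrete way to "routinely monitor and test" an algorithm for group-level disparities is an adverse-impact check such as the four-fifths rule, a common heuristic from US employment guidance that flags any group whose selection rate falls below 80 percent of the highest group's rate. The sketch below uses illustrative numbers and is not a complete fairness audit:

```python
# Sketch of a routine fairness check on a selection process:
# compute per-group selection rates and flag groups whose rate is
# below 80% of the best-performing group's rate (four-fifths rule).
# Group names and counts are illustrative.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number who applied)."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return group -> True if its rate is under threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

flags = adverse_impact_flags({
    "group_a": (50, 100),  # 50% selected
    "group_b": (30, 100),  # 30% selected -> ratio 0.6, flagged
})
print(flags)
```

Running such a check on every model release, rather than once at launch, is what turns fairness design into the routine monitoring described above.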
Of course, even when work is augmented by AI, many decisions will still fall to humans—who are prone to cognitive shortcuts. But we can take this another step forward: behavioral science can help create environments and offer choices that encourage better decision-making.
For example, a hiring manager or recruiter may show similarity bias in reviewing a resume. A resume-masking AI tool could be used to anonymize demographic details in order to reduce bias and nudge the resume reviewer to focus on the most critical job-related aspects. The intent is not to rely on biased shortcuts or “trick” people into one decision or another but rather to nudge them to consider the most pertinent factors.
Considerations for Mitigating Bias in Your Organization
To get started:
- Examine your end-to-end talent life cycle to identify the areas most prone to bias (e.g., decisions on resume screenings, interviewing, selection, performance management, or internal mobility).
- Explore AI/data science solutions, designed for fairness, to reduce the largest areas of potential bias (e.g., identifying processes or tasks that can be automated).
- Determine behavioral science opportunities to nudge decision-makers at the right times with the right information to inform decisions (e.g., examining a full review period rather than only recent actions when measuring performance, evaluating ability test results to supplement resumes when selecting candidates for interviews, or showing candidate details as a group instead of one by one to compare to the desired fit).
- Keep in mind that, for humans, a bias issue can be seen as a learning issue. Think, for example, of how we all learned to drive: we start at an “unconscious incompetence” level and move on to “conscious incompetence,” then to “conscious competence,” and finally, through learning and practice, arrive at “unconscious competence.” We can think of the journey from bias to inclusion in the same way, starting with “unconscious bias,” moving to “conscious bias” (uncomfortable), then through learning to “conscious inclusion,” and finally through practice and more learning to “unconscious inclusion” and new business-as-usual inclusive behaviors. AI, nudging, and behavioral science tools can help us get there.
The combination of AI and behavioral science will be on the rise in 2020. An increased number of AI tools will continue to emerge, and organizations will become more familiar with behavioral science tools and nudges to help their people make better and more informed talent decisions.
Bersin will continue to explore the topics of bias and the impact of AI and behavioral science through 2020 with research in areas such as nudging and AI for inclusion, people analytics for the individual, the diversity and inclusion solution provider market, and our next High-Impact People Analytics study.
Zachary Toof is a research manager, People Analytics, at Bersin™, Deloitte Consulting LLP.
Nehal Nangia is a research manager, Talent and Workforce Performance, at Bersin™, Deloitte Consulting LLP.
Janet Clarey is lead advisor, Technology, Analytics & Diversity & Inclusion, at Bersin™, Deloitte Consulting LLP.
1 The bias barrier: Allyships, inclusion, and everyday behaviors, Deloitte Development LLP, 2019, https://www2.deloitte.com/content/dam/Deloitte/us/Documents/about-deloitte/us-inclusion-survey-research-the-bias-barrier.pdf.
2 The 2019 State of Artificial Intelligence in Talent Acquisition, HR Research Institute, 2019, https://www.oracle.com/a/ocom/docs/artificial-intelligence-in-talent-acquisition.pdf?elqTrackId=1279a8827f3d4548ae3f966beeeef458&elqaid=83148&elqat=2.
3 Paul Teich, “Artificial Intelligence Can Reinforce Bias, Cloud Giants Announce Tools For AI Fairness,” Forbes.com, September 24, 2018, https://www.forbes.com/sites/paulteich/2018/09/24/artificial-intelligence-can-reinforce-bias-cloud-giants-announce-tools-for-ai-fairness/#332c72fd9d21.