
New study finds AI-enabled anti-Black bias in recruiting

Dawn Zapata  Senior Content Producer / Thomson Reuters

· 5 minute read

Too often, the biases that professionals from minority groups experience in the real world are replicated in the AI-enabled algorithms used in training & recruiting

Without human intervention, it is easy for algorithms used in the recruiting process to reproduce bias from the real world, according to a 2019 study published in Harvard Business Review (HBR) by researchers from Northeastern University and the University of Southern California.

Since then, it is questionable whether the situation has improved, even as artificial intelligence has emerged as a powerful tool in the evolving 21st-century business landscape with the ability to learn and identify trends. A new report entitled The Elephant in AI, produced by Prof. Rangita de Silva de Alwis, founder of the AI & Implicit Bias Lab at the University of Pennsylvania Carey Law School, looks at employment platforms through the perceptions of 87 Black students and professionals, coupled with an analysis of 360 online professional profiles, with the goal of understanding how AI-powered platforms “reflect, recreate, and reinforce anti-Black bias.”

Key findings from the research

The new report explored a range of AI-related employment processes, including job searches, online networking opportunities, and electronic resume-submission platforms. More specifically, key findings include:

      • In an analysis of the job-board recommendations received by those surveyed, 40% of respondents noted that they had received recommendations based on their identities rather than their qualifications. Moreover, 30% noted that the job alerts they had received were below their current skill level.
      • Almost two-thirds (63%) of respondents noted that academic recommendations made by the platforms were lower than their current academic achievements. This finding was particularly disappointing given that, as the survey highlights, Black women are the most educated group in America.

For the most part, Silicon Valley is still predominantly populated by white people, with men comprising the majority of leadership positions. This raises the question of how the technology industry can create fair and balanced AI for the masses if there are still diversity challenges within the very teams designing and implementing the algorithms upon which that AI relies. In fact, Amazon scrapped a recruiting tool in 2018 because of such bias.

Further, a 2019 study from the U.S. National Institute of Standards and Technology that examined 189 facial recognition algorithms from 99 different developers found that a majority falsely identified non-white faces at higher rates than white faces. Although facial recognition is commonly used by both federal and state governments, it has raised concerns over AI-enabled bias, leading cities such as Boston and San Francisco to ban its use by their police departments.

AI-enabled biases in recruiting & testing

As with facial recognition, long-standing patterns of hiring discrimination are increasingly AI-enabled. The UPenn report notes that Black professionals in today’s employment marketplace continue to receive 30% to 50% fewer job call-backs when their resumes contain information tied to their racial or ethnic identity.

With AI being developed as an employment tool meant to help provide equality of opportunity, the survey asked respondents whether they feared not being considered for employment by employers using AI-based recruiting technologies. Fewer than 10% said it would cause them little worry, yet more than 20% said it would be of considerable worry to them. The report expands on hiring discrimination by exploring potential biases incorporated within pre-programmed “expected responses”, with researchers noting that these responses signal potential data inequity.

Other inequities centered on skills-based test questions programmed into hiring platforms, questions that have long been known to be biased. Such questions, built upon exams such as the Law School Admission Test (LSAT), create unfair screening assessments. And given the amount of research on these biases in standardized testing, the use of legacy assessment models continues to inhibit Black professionals and other minority groups from advancing in employment hiring pools.

In considering the use of AI platforms by employers, the report points both to the technical complexity of the AI behind the platforms and to the limited ability of those in human resources or other hiring roles to understand such complexities.

Actions for employers & developers

Until there are industry-wide best practices, the responsibility for ensuring that AI algorithms promote equity falls upon the vendors that build the tools and the employers that use them. According to the 2019 HBR study, employers using AI-enabled recruiting tools should analyze their entire recruiting pipeline — from attraction to on-boarding — in order to “detect places where latent bias lurks or emerges anew.”

Prof. de Silva de Alwis calls for diverse teams to develop less biased models and algorithms, and advises employers and software developers to leverage tools that minimize bias, such as Microsoft’s Fairlearn, an open-source toolkit that empowers data scientists and developers to assess and improve the fairness of their AI systems. InterpretML, also a Microsoft creation, is another tool that helps AI-model developers assess their models’ behavior and de-bias their data.
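
For teams looking for a concrete starting point, here is a minimal sketch of how Fairlearn’s MetricFrame could be used to audit a screening model’s decisions for group-level disparities. The candidate data and metric choices below are invented purely for illustration; only the Fairlearn and scikit-learn API calls themselves come from the toolkits:

```python
# Hypothetical audit of a resume-screening model's outputs using Fairlearn.
# y_true, y_pred, and the race labels are invented illustration data.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# 1 = candidate advanced to interview, 0 = screened out
y_true = pd.Series([1, 1, 0, 1, 0, 1, 1, 0])   # ground-truth "qualified" labels
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])   # the model's screening decisions
race = pd.Series(["Black", "Black", "Black", "Black",
                  "White", "White", "White", "White"], name="race")

# MetricFrame computes each metric overall and per sensitive-feature group
audit = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=race,
)

print(audit.by_group)      # per-group accuracy and selection rate
print(audit.difference())  # largest between-group gap for each metric
```

A large between-group gap in selection rate, as surfaced by difference(), would be the kind of signal that warrants the human review of the recruiting pipeline that the HBR study recommends.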

Employers should also take a “second look” at the resumes and CVs of underrepresented minorities to mitigate the biases that run the risk of being reproduced on a vast scale by AI-led recruitment platforms, says Eric Rosenblum, Managing Partner at Tsingyuan Ventures, the largest Silicon Valley venture capital fund for Chinese diaspora innovators.
