Just a blink ago, using artificial intelligence (AI) in our daily lives felt like science fiction. With the rise of generative AI tools like ChatGPT and Midjourney, it’s reality—and the business world is poised for a revolution.
AI has become a staple in almost all business workflows, with 43% of workers already using AI tools. The number is even higher for human resources and recruiting professionals, as 88% of global companies use some form of AI in their HR programs. Organizations that aren’t already a part of this statistic may soon join the club: the AI recruitment software market is projected to reach $942.3 million by 2030.
This rapid adoption promises exciting possibilities, not just for efficiency (early users saved 75% per candidate screen!), but also for building a more diverse and equitable workforce. Over 65% of talent acquisition professionals believe AI can remove human bias, opening doors for talent from all backgrounds. On paper, AI offers the perfect tool for building modern recruitment programs with Diversity, Equity and Inclusion (DEI) embedded in every step.
Still, the intersection of AI and DEI remains complex in the future of recruitment. We’ll dive into the complexities of this relationship and explore how to harness AI’s potential for a fairer, more inclusive recruitment landscape.
How AI Can Support Diversity and Inclusion in Recruitment
Widespread layoffs have affected various sectors, driven by increasing economic uncertainty. The tech industry took the hardest hit, closely followed by retail and healthcare. This raises critical concerns about the impact on diverse talent, as Forbes warns of heightened vulnerability for women and minorities in the labor market during periods of economic instability.
To add fuel to the fire, DEI teams are often the first casualties when reductions in force occur. Research from Revelio Labs found that over 300 DEI professionals were laid off at companies that downsized in 2022. This same notion applies to upcoming hiring needs as well: according to NPR, DEI job postings saw a 40% decline over the past year.
In response to budget cuts and staff reductions, companies are increasingly using AI in recruitment to sustain DEI efforts. Machine learning (ML) algorithms offer the alluring prospect of streamlining candidate selection, objectively evaluating skills and potentially mitigating human biases. However, stakeholders remain concerned about its potential for algorithmic bias and AI’s limitations in capturing intangible qualities like cultural fit.
It’s not all doom and gloom, though. AI tools have shown they can tap into hidden talent pools by tackling barriers faced by underrepresented groups. Some are even addressing the problem head-on.
- Maersk Tankers’ recruitment data exposed a stark reality: a vast gender gap in their talent pool. Determined to bridge this divide, they turned to Develop Diverse’s AI solution. The technology’s bias detection and inclusive language rewriting capabilities revamped their job descriptions by removing gendered language. This change significantly increased the gender diversity of their talent pool: over 50% of new hires in the following year were women.
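To make the idea concrete, here is a minimal sketch of keyword-based gendered-language flagging. The word lists and function are hypothetical illustrations; commercial tools like Develop Diverse rely on far richer linguistic models than simple keyword matching.

```python
import re

# Hypothetical gender-coded word lists, for illustration only.
MASCULINE_CODED = {"competitive", "dominant", "rockstar", "ninja", "aggressive"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def flag_gendered_language(job_description: str) -> dict:
    """Return any gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z']+", job_description.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

flags = flag_gendered_language(
    "We need a competitive, dominant rockstar to join our supportive team."
)
print(flags)
# {'masculine': ['competitive', 'dominant', 'rockstar'], 'feminine': ['supportive']}
```

A real rewriting tool would go a step further and suggest neutral alternatives (e.g. “driven” instead of “dominant”), but even a flagging pass like this shows how job descriptions can be screened at scale before posting.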
However, it’s important to address the ethical implications and potential risks associated with the use of AI in talent acquisition. While AI offers promising solutions for diversifying recruitment, irresponsible implementation can amplify harmful biases and hinder progress.
Challenges and Considerations of AI-Powered Talent Acquisition
The marriage of AI and modern TA seems like an employment equity dream come true. Its ability to remove bias-triggering identifiers and quickly scan resumes holds immense promise for mitigating unconscious bias in recruitment. In a perfect world, this type of blind resume evaluation should promote workplace diversity by increasing the volume of qualified applicants. After all, talent acquisition and hiring managers would then only be able to evaluate resumes based on skills and accomplishments.
Still, solely relying on AI for blind resume evaluation is a risky gamble. Algorithmic bias remains a threat and human judgment is still essential for a holistic assessment. It’s worth noting that responsible AI recruitment companies (like Hired) perform independent AI audits to hold their algorithms accountable.
Despite these preventative measures, algorithmic bias can still lurk in the code. AI systems are only as good as the data fed into them. If your data is already biased, AI not only reflects those biases in its decisions but automates and magnifies those existing preconceived notions.
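A toy example makes the mechanism clear. The “screener” below (not any vendor’s actual algorithm) learns keyword weights from past hires; if those hires skew toward one group, terms correlated with that group acquire positive weight, and the bias in the data becomes bias in every future score.

```python
from collections import Counter

# Hypothetical training data: resumes of past hires that happen to skew male.
past_hires = [
    "captain of the men's chess club, java developer",
    "quarterback, men's soccer team, python engineer",
    "java developer, hackathon winner",
]

def learn_weights(resumes):
    """Weight each word by how often it appears among past hires."""
    counts = Counter(w for r in resumes for w in r.replace(",", "").split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def score(resume, weights):
    """Score a new resume by summing the learned keyword weights."""
    return sum(weights.get(w, 0) for w in resume.replace(",", "").split())

weights = learn_weights(past_hires)
# "men's" appears in the training data, so it carries positive weight; an
# otherwise identical resume mentioning "women's" is scored lower.
print(score("java developer, men's chess club", weights) >
      score("java developer, women's chess club", weights))  # True
```

This is essentially what happened with Amazon’s experimental screener, described below: the model penalized signals associated with women because its training data reflected a male-dominated applicant history.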
Nihar Shah, a machine learning professor at Carnegie Mellon University, points out that there’s still a considerable amount of effort needed to achieve complete fairness in the algorithms utilized by AI recruitment software. “How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable—that’s still quite far off.”
Take this example from Amazon:
- In 2014, Amazon’s AI research team secretly built a machine learning model to automate its search for top talent. The model scored applicants’ resumes, having been trained to vet candidates by analyzing patterns in resumes submitted to the company over the previous 10 years.
However, the ML model absorbed biases from those historical hiring practices, mainly rating men as top or ideal candidates. This mirrored the prevailing male dominance in the tech sector and in Amazon’s own workforce, which was over 50% male. Despite attempts to correct these errors, Amazon eventually phased out the program in 2018.
Human Oversight and Accountability
Accountability is a fundamental aspect of any business decision, but AI’s potential consequences make it even more critical. Human oversight plays a pivotal role in maintaining effective and ethical decision-making while using AI.
Although AI tools can help TA professionals save time and streamline resume screening, human input is essential because it can correct any biased or erroneous recommendations an algorithm might make. Humans also have additional context about candidates that AI just isn’t capable of considering.
For example, AI can mistakenly filter out “hidden workers”: qualified candidates who are often disqualified during the application process due to factors like long resume gaps. A report by Harvard Business School and Accenture found that automated hiring systems excluded an estimated 27 million such candidates from full-time work.
Still skeptical? Here’s the TL;DR on an experiment conducted by Eurovision, a Belgium-based news network:
- Eurovision used Jobscan, an AI-powered resume screening software, on an employee’s resume, asking the algorithm to evaluate his suitability for an actual job opening. Shockingly, the employee received a low ranking because the ATS screener failed to recognize his international experience, despite the fact that he had lived in five different countries. The AI struggled to identify instances of expatriation and could only recognize travel as international experience.
“Human expertise and intuition play a vital role in understanding nuanced qualities that cannot be easily measured by algorithms,” says Hemanandini Deori, Co-Founder at VProPle, a technical interview service platform. “While AI brings undeniable advantages to the recruitment process, it is crucial to maintain a balance between automation and human intervention.”
Upholding ethical standards in AI for recruitment involves placing a strong emphasis on fairness, transparency and accountability. Tackling algorithmic bias is key to ensuring all candidates have an equal and equitable shot at landing a role.
While employers strive to minimize bias in AI talent acquisition, the lack of diversity in the AI field itself can pose a significant challenge. Reports from AI Now Institute and McKinsey demonstrate a low percentage of women and minority professionals within the AI development space. Addressing this imbalance may be crucial for mitigating bias in AI recruitment algorithms.
Safeguarding candidate privacy is paramount with AI talent acquisition tools, which rely on data collection to train and improve their algorithms. Respecting data privacy through robust security measures and responsible data practices ensures trust, protects individuals and fuels ethical AI usage that empowers fair and equitable hiring.
3 Best Practices for Using AI in Diversity Talent Acquisition
Use AI for Augmentation, Not Replacement
AI should act as a companion to human decision-making, not a substitute. Understand the right times and places to use AI tools and try to avoid overreliance. Remember that AI doesn’t have innate human knowledge or context, so use your best judgment and always let humans make the final decisions when interpreting AI outputs.
Set your workforce up for success when introducing new AI tools into your hiring workflow. Train your TA managers on how to properly use AI tools in the recruiting efforts—offer specific resources on DEI recruitment and recruitment AI best practices.
Build AI Transparency and Accountability
Upholding ethical AI usage in talent acquisition requires employers to adopt AI transparency, while also holding algorithms accountable when they generate unfair or biased outcomes. Transparency fosters candidate trust in AI tools, improves algorithms and supports compliance standards.
AI transparency relies on three core principles:
- Ease of explainability: How easily can we explain how the algorithm works on the inside?
- System governance: Are there proper processes and ample documentation for key decisions?
- Open disclosure of results: Do we openly and clearly share what the algorithms can do and why they’re used?
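One lightweight way to operationalize these three principles is to keep a structured disclosure record for every AI tool in the hiring pipeline. The sketch below is illustrative; the field names and the tool are hypothetical, not an industry standard.

```python
from dataclasses import dataclass, asdict

@dataclass
class AIToolDisclosure:
    """Per-tool transparency record mapping to the three principles."""
    tool_name: str
    purpose: str                 # open disclosure: what it does and why it's used
    explanation: str             # explainability: plain-language account of how it works
    decision_log_retained: bool  # governance: are key decisions documented?
    last_audit_date: str         # governance: most recent independent bias audit

disclosure = AIToolDisclosure(
    tool_name="ResumeScreener v2",  # hypothetical tool
    purpose="Ranks applications by skills match for a recruiter's review",
    explanation="Keyword and skills matching against the job description",
    decision_log_retained=True,
    last_audit_date="2024-01-15",
)
print(asdict(disclosure)["tool_name"])  # ResumeScreener v2
```

Records like this give compliance teams something auditable and give candidates a concrete answer when they ask how their application was processed.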
Maintaining AI transparency in recruitment showcases your company’s commitment to its values and ethics, even before candidates officially join the team.
You can demonstrate this to candidates in a couple of ways:
- Inform candidates when AI tools are used during the hiring process
- Ask job applicants for consent before using AI to check their qualifications
- Explain what AI tools are used and how they’ll analyze data
While implementing transparent AI practices can feel like a daunting task, it fosters trust and fairness with your stakeholders by shedding light on the decision-making process: applicants gain a clearer understanding of how their applications are processed and why they are selected or rejected.
Monitor and Seek Feedback
Finally, seek input on your existing DEI hiring practices to explore how AI might enhance your ongoing efforts. As AI becomes part of your recruitment marketing strategy, actively seek candidate feedback via surveys or post-interview conversations. Encourage your TA team to ask candidates about their thoughts on AI usage during the hiring process.
Incorporating feedback into your ongoing efforts enables you to refine your DEI strategy and fine-tune your use of AI tools. It also makes your company’s commitment to fairness evident, nurturing trust and transparency with your talent pool.
Amplify DEI Efforts with AI for Talent Acquisition
In the changing world of TA tech, the relationship between AI and DEI in recruitment has become increasingly complex.
AI can be a powerful tool for streamlining tasks and finding top talent. It also demands the highest level of responsibility and ethical awareness. When used carefully and correctly, artificial intelligence has the power to build more inclusive and equitable workforces at scale.
Remember, AI is a collaborative tool that works best when humans are in the loop. Before diving headfirst into the world of AI for talent acquisition, take the time to do your research, fully understand technology solutions and make sure the tools you choose fit your company’s specific needs and values.