While many people might be happy to have AI make suggestions around which film to watch next, or to recommend other music they may enjoy, handing it responsibility for the selection process in their next job application can raise understandable concerns.
In this blog post, we take a closer look at the pitfalls and the potential of using AI in recruitment, and where it might add value rather than systemized bias.
There’s no denying that in the modern world, AI has a lot of great uses. Across a range of industries, it’s increasingly being used to streamline workflows and cut costs. In fact, in the recruitment world, AI — and more specifically machine learning — has been used in various forms for some time now.
Common applications include automated interviews, resume (or CV) screening, and administrative duties such as identifying duplicate candidate records. But the truth is, there’s a lot we don’t know about how AI is being used in recruitment. And that’s a concern.
In the recruitment business, AI can really shine at crawling talent pools for potential candidates based on specific job criteria such as qualifications and experience. It can also learn, broadly, what kind of candidate is a good fit for a particular role. Its real potential, however, lies in automating some of the repetitive, time-consuming work that goes along with screening, assessing, and scheduling.
Even as far back as 2017, studies showed that hiring decision-makers saw the cost- and time-saving potential of AI, alongside its ability to deliver better candidate matches. Interestingly, 43% of decision-makers also believed it would remove human bias. And that’s exactly where AI has come unstuck: when it’s been used to replace human decision-making in an effort to speed up the hiring process.
When we talk about AI in recruitment, we really mean machine learning — computer systems which can learn from statistical models and data sets. As Amazon discovered in 2015, using historical human data to inform AI decision models can come at a cost to both potential candidates and their future employers. Their experimental hiring tool which made use of machine learning was informed by ten years of resumes submitted to the company.
The issue was that the computer system was essentially given a decade of data revealing male dominance in the tech industry and adapted its algorithms accordingly. The result? Resumes containing the word ‘women’s’ were automatically downgraded. When this ‘algorithmic bias’ was discovered, interventions were made to make the system gender-neutral, but it left lingering concerns that unintended discrimination could recur, raising important questions about how to ensure that recruitment AI is transparent and fair.
In fact, transparency over the use of AI in recruitment is one of the biggest challenges currently facing the industry. That’s where new legislation is playing a crucial role.
A game changer for the use of AI in recruitment, New York City’s AI employment law restricts the use of ‘automated employment decision tools’ and requires recruiters to subject those systems to an annual bias audit.
Recruiters will also have to be transparent about where AI has been used in the recruitment process, and provide candidates with alternative options for having their applications processed. Undisclosed or biased use of AI in hiring will be subject to fines.
Taking effect on January 1, 2023, New York City’s law blazes a trail for further legislation to tackle the bias and potential discrimination that machine learning algorithms can generate.
Similar conversations are taking place in the European Union, where the AI Act is set to institute self-certification programs and government oversight. The law will create transparency requirements for AI systems that interact with people, and will ban certain ‘unacceptable risk’ uses of AI. Individuals or companies located within the European Union, placing an AI system on the EU market, or using an AI system within the EU would be subject to the regulation.
It’s easy to be dismissive of AI in recruitment. After all, it’s a people business. But there are some clear use cases where it can make trustworthy, time-saving improvements to the placement process, such as trawling records to match specific criteria. For more common roles requiring limited experience or technical skills, there is perhaps wider scope for AI to assist with search and selection. The real danger arises when AI trained on imperfect, and therefore inherently biased, data is used for key decision-making.
Conceivably, AI could be used to handle the entire recruitment process — which perhaps less scrupulous recruiters might find appealing. But for most recruiters there’s a sense of professional pride, and a human aspect to hiring — building relationships with employers and candidates — which AI simply can’t replace.
As an example, there could be three equally well-qualified candidates in the running for an open position. Their education, qualifications and experience all fit the bill, but what about their cultural fit? A good recruiter understands the importance of aligning the employer’s value proposition with the right candidates. AI can easily miss the subtle verbal and non-verbal nuances that influence human decision-making. Humans can also see beyond the data, especially when looking for someone with transferable skills from outside a particular industry. Good recruiters know that instinct, as much as anything else, comes into play when making the right hire.
There’s also the candidate’s perspective to consider. They may have a (sometimes valid) suspicion of any technology used to influence the hiring process. For example, even though Applicant Tracking Systems (ATS) have been around for decades, you can still find plenty of blog posts and guides on ‘how to beat the ATS’. Fears run even deeper with undisclosed uses of AI, particularly when it comes to diversity and inclusion in the workplace. As the Amazon experiment revealed, machine learning opens up the potential for discriminatory algorithms, and runs the inherent risk of excluding the candidates you actually want in favor of the ones it thinks you want.
Although there are some tasks and activities which can be comfortably replaced by AI, when it comes to key decision-making, sourcing the right talent and identifying the best-fit candidate is best left in the hands of humans. But how can you speed up the time to hire without using AI?
Recruitment automation can take care of repetitive, repeatable everyday tasks, giving recruiters more time to spend with candidates and employers. It’s particularly helpful for keeping all parties updated through emails, form letters, and notifications, cutting down the time spent manually creating correspondence.
PCRecruiter is an ATS/CRM hybrid which offers recruiters a huge range of opportunities to automate aspects of their recruitment process, making workflows more streamlined and cost-effective. Tailored to your specific workflows, it’s a seamless, customizable, and powerful tool for handling your end-to-end talent sourcing.
Want to learn more about the benefits of using automation in recruitment? Read our blog post 5 Ways To Use Automation In Recruitment.