Increasingly, employers are using or exploring the use of artificial intelligence (AI) in recruitment or an employment context.
AI can help make recruitment processes more efficient, identify the best talent among applicants, and minimise – or even eliminate – the potential for human bias in the recruitment process.
AI tools can be used to perform sifts of CVs and application forms, search prospective employees’ social media for key phrases or terms and schedule appointments and interviews.
Similarly, it can be used to analyse tone of voice or facial movements during interviews, or to filter candidates automatically through online assessments and tests. But its use is not without risks.
A recent report from the TUC, published jointly with the AI Consultancy in May 2021, reignited the debate over how AI can be used in a way that minimises the legal risks, and what legal reforms may be required to support this.
Burness Paull summarised ten main points for employers to be aware of when using artificial intelligence in recruitment.
All you need to know about artificial intelligence in recruitment
AI can be time and cost-effective
The main benefit of using artificial intelligence in recruitment is that it can make the process, and other HR functions, much more time-efficient, which in turn can prove cost-effective. Screening CVs and applications for a role with thousands of applicants might take a person weeks, if not months.
AI tools can assist with these time consuming, and more mundane, tasks.
AI has the potential to remove human bias
AI can potentially remove elements of human bias from the process by helping to standardise aspects of the recruitment process and other HR functions. Fundamentally, it removes individual discretion from the decision-making process.
For example, having an AI tool sift applications rather than a manager reduces the risk of that manager harbouring, say, racist or sexist views, and of those views affecting their recruitment decisions.
AI can also conduct sentiment analysis on, for example, job ads or descriptions, to ensure the language has no hidden bias. However, AI cannot eliminate bias completely (see below).
But AI also has the potential to perpetuate discrimination…
Whilst on the face of it, the use of AI is unlikely to directly discriminate on the grounds of a protected characteristic, the main concern is the potential for AI to be classed as a “provision, criterion or practice” (PCP) within the meaning of the indirect discrimination provisions of the Equality Act 2010.
Put simply, indirect discrimination occurs where an employer applies a PCP or policy to everyone, but it puts people with a particular protected characteristic at a disadvantage.
The concern with AI is that any algorithm(s) upon which it is based could be deemed to be a PCP for the purpose of indirect discrimination.
So a straightforward, if hopefully unlikely, example would be a computer program that sifts CVs to select only candidates who are more than 5 foot 8 inches tall – that could be said to be a PCP which indirectly discriminates against women.
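By way of illustration, the following toy Python sketch (all applicants, heights and the pass rule are invented for the example) shows how such a height-based sift would disproportionately screen out women. It uses the US-style “four-fifths” rule of thumb – comparing selection rates between groups – as a rough adverse-impact check; this is not a legal test under the Equality Act, just a common statistical sanity check.

```python
# Invented toy data: a small pool of applicants with sex and height recorded.
applicants = [
    {"sex": "F", "height_cm": 160}, {"sex": "F", "height_cm": 165},
    {"sex": "F", "height_cm": 170}, {"sex": "F", "height_cm": 175},
    {"sex": "M", "height_cm": 170}, {"sex": "M", "height_cm": 175},
    {"sex": "M", "height_cm": 180}, {"sex": "M", "height_cm": 185},
]

def passes_sift(applicant):
    # The PCP in question: only candidates taller than ~5 ft 8 in (173 cm) proceed.
    return applicant["height_cm"] > 173

def selection_rate(group):
    # Fraction of the group that the sift lets through.
    return sum(passes_sift(a) for a in group) / len(group)

women = [a for a in applicants if a["sex"] == "F"]
men = [a for a in applicants if a["sex"] == "M"]

# Four-fifths rule of thumb: the disadvantaged group's selection rate
# should be at least 80% of the advantaged group's.
ratio = selection_rate(women) / selection_rate(men)
print(f"women: {selection_rate(women):.0%}, men: {selection_rate(men):.0%}, ratio: {ratio:.2f}")
# → women: 25%, men: 75%, ratio: 0.33 – well below 0.8, flagging adverse impact
```

An employer could run a check of this kind on any automated sift before deploying it, though a statistical disparity is only the starting point of the legal analysis, not the end of it.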
Biased data makes for a biased algorithm
AI tools are only as good as the data they are fed – if the data set is biased, the algorithm will likely be biased too. A high profile example was Amazon’s attempt to build a CV screening algorithm. Using Amazon’s recruitment data from the past decade, the algorithm taught itself that male candidates were preferable to female, because Amazon’s previous recruitment decisions had been subject to bias.
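The mechanism behind the Amazon example can be shown with a deliberately naive sketch. The CVs, tokens and scoring formula below are all invented; real screening models are far more complex, but the feedback loop is the same: a scorer trained on biased historical decisions learns to penalise words associated with the disadvantaged group.

```python
from collections import Counter

# Invented toy data: past CVs as token lists, with the historical outcome
# (1 = hired, 0 = rejected). The past decisions, not the candidates, were biased.
history = [
    (["software", "engineer", "chess", "club"], 1),
    (["software", "engineer", "rugby"], 1),
    (["software", "engineer", "womens", "chess", "club"], 0),
    (["software", "engineer", "womens", "netball"], 0),
]

hired_counts, rejected_counts = Counter(), Counter()
for tokens, hired in history:
    (hired_counts if hired else rejected_counts).update(tokens)

def word_weight(word):
    # Naive weight: how much more often the word appears in hired CVs
    # than in rejected ones.
    return hired_counts[word] - rejected_counts[word]

def score(tokens):
    return sum(word_weight(t) for t in tokens)

# The scorer has "learned" to penalise the word "womens" purely because
# the biased historical decisions rejected CVs containing it.
print(word_weight("womens"))  # → -2
```

Nothing in the code mentions sex, yet the model reproduces the bias in its training data – which is why auditing the data is as important as auditing the algorithm.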
The lack of human touch
AI lacks any kind of common sense, compassion or empathetic touch. This can lead to irrational results, such as failing to authorise a holiday request made due to particular personal circumstances, or an indirectly discriminatory algorithm.
By way of example, the Bologna Court in Filcams Cgil Bologna and others v Deliveroo Italia S.R.L. decided that an app used by the Italian Deliveroo company was indirectly discriminatory.
The system treated equally all data inputs in relation to the willingness of riders to work generally and at the busiest times. This might seem sensible, but it failed to take into account any good reasons for late cancellations or inabilities to work, such as childcare or illness, and this disproportionately affected women who tend to bear caring responsibilities.
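A minimal sketch of the kind of “reliability” ranking at issue might look like the following. The rider data, weights and scoring formula are invented – the real algorithm’s internals were not made public in the case – but it captures the flaw the court identified: every cancellation is penalised identically, regardless of the reason behind it.

```python
# Invented toy data: two riders' booking histories.
riders = {
    "A": {"late_cancellations": 0, "peak_sessions_missed": 0},
    "B": {"late_cancellations": 3, "peak_sessions_missed": 2},  # e.g. childcare or illness
}

def reliability_score(rider):
    # The flaw: a childcare emergency and a no-show for no reason
    # both reduce the score by exactly the same amount.
    return 100 - 10 * rider["late_cancellations"] - 5 * rider["peak_sessions_missed"]

for name, stats in riders.items():
    print(name, reliability_score(stats))
# Rider B ranks lower and, in a Deliveroo-style system, would get
# later access to book shifts - compounding the original disadvantage.
```

Because caring responsibilities fall disproportionately on women, a facially neutral scoring rule like this can produce exactly the kind of indirect discrimination the court found.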
‘Black Box’ problem
What actually is AI, and how does it work? The problem is that most employees probably won’t know, and so there is a lack of transparency around how decisions are made, which is sometimes called the ‘black-box’ problem.
How does your workforce feel about the use of AI?
Another TUC report on AI highlighted that many employees feel uneasy about the use of AI by their employers to make employment-related decisions, with only 28% feeling comfortable with technology being used to make decisions about them at work. As an employer, how will you manage this apprehension?
Under the UK GDPR, there is currently no obligation to provide meaningful information about data processing by AI where the processing is necessary for the performance of the employment contract, or where there is human involvement in the decision-making. This exemption has been widely criticised.
The EU’s regulations for reform
The EU has published proposed regulations laying down harmonised rules for the safe use and development of AI – the first jurisdiction to do so. The proposal classes AI systems used in employment as ‘high risk’ and therefore subject to particular safeguards.
As the UK has left the EU, the regulations will not be binding here, but any UK companies using AI in the EU will be subject to the regulations in force there.
Whilst the EU is legislating in Europe, it is likely the UK will see reform in this area in the near future too. The recent TUC Report is just one paper, following a line of others, which calls for legislative reform and increased regulation – and it is unlikely to be the last. This is an area to keep an eye on.
What does this mean for employers?
Employers need to know what AI tools they are using and be aware of any unintended consequences of their use. The extent to which employers bear legal responsibility for discriminatory acts arising from AI systems remains to be fully tested, but employers should be aware of such risks.
Consideration should therefore be given as to whether such tools are actually necessary, and any impact they might be having on the fairness of decision-making. Employees and workers should also be kept informed of how decisions are made about them.
If you’d like to know more about using AI in your recruitment process and your responsibilities as an employer, please do get in touch and we’d be delighted to talk you through things in more detail.