AI in recruitment and potential for bias

JMW Solicitors LLP

There are many benefits to using Artificial Intelligence (AI) tools in recruitment: they can automate filtering and promote efficiency, scalability and consistency. AI in recruitment is on the rise, with a November 2023 survey by the Institute of Student Employers finding that nearly a third of employers are now using AI as part of their hiring process.

AI technologies are being used across every stage of the recruitment process, including:

  • Sourcing,
  • Screening, and
  • Interviewing.

The risks

At each stage there are a number of risks. These include the potential for discrimination and bias against applicants, as well as a risk of exclusion for applicants who may not have access to technology due to age, disability, socio-economic status or religion.

The risk of discrimination and bias is hard to ignore. Back in 2018, the BBC reported that an AI recruitment tool used by Amazon was being scrapped after it was found to discriminate on the basis of sex. It had been trained on data from CVs submitted to the firm, mostly by men, and had started to penalise CVs that included the word “women”.

In another case reported by the BBC, a candidate at a different company was initially screened out of an application process. They then resubmitted the same application with a different birthdate, which made them younger. With this change, they passed the screening process and proceeded to interview.

At a US company, an AI screener was found to be awarding candidates extra marks if they listed typically male hobbies such as baseball and basketball. Conversely, the screener awarded lower marks for typically female sports such as softball.

A further BBC article reported that AI software was being used to analyse candidates’ voice and body language and, on the basis of this analysis, to make assumptions about whether they had certain job-related characteristics. Such tools have been found to have no scientific basis. What they do, however, is score candidates differently based on gender and race, and score them lower if they are wearing certain clothing, such as headscarves. This poses an obvious risk of discrimination against those who wear religious dress. Scores even changed based on the brightness level of the camera. The potential disadvantage for those in minority groups due to race, religion, age, sex and disability is clear.

Beyond the risk of unlawful discrimination, there is a further risk in using AI to filter applicants: it may result in selecting the same type of candidate every time, leading to a lack of diversity in the workforce. Diversity is known to give companies a competitive edge, with benefits including staff retention, innovation and creativity, a drive to achieve high standards, high levels of motivation, and a positive employer reputation that attracts industry talent. Businesses also risk missing out on excellent candidates because of minor discrepancies in their applications.

This was well illustrated in a news story published by the Economic Times in September 2024. In brief, a manager had become concerned when his HR department was unable to find qualified candidates over a three-month period. He was repeatedly told that the candidates had not passed the initial screening process. In order to investigate, he submitted a slightly modified version of his own CV under a different name, which was also rejected.

He found that the screening process used an AI tool which was automatically rejecting all candidates who lacked experience in a particular programming language – which happened to be outdated. This highlights how the use of AI in filtering applications risks overlooking qualified, suitable, and potentially excellent candidates because of very minor discrepancies in their applications.

This raises the question: is human judgement replaceable?

The law

Readers will be aware that the Equality Act 2010 provides protection against discrimination in the workplace. Discrimination includes treating someone less favourably because of a protected characteristic, putting them at a disadvantage compared to someone who does not have that characteristic.

Protected characteristics include age, sex, gender reassignment, disability, race, religion, marriage and civil partnership, pregnancy and maternity, and sexual orientation. In relation to the protected characteristic of disability, employers also have a duty to provide reasonable adjustments to remove barriers to working.

Employers must follow the law on discrimination when recruiting. This includes advertising, screening, interviewing, and ultimately decisions on which candidates to recruit. Employers are prohibited from both direct and indirect discrimination.

In addition, article 5(1)(a) of the UK GDPR requires that employers must process personal data “lawfully, fairly and in a transparent manner”.

Recommendations

Recruiters should not be discouraged from harnessing the advantages available to them by using AI. They should, however, be alive to the legal and commercial risks when doing so and take steps to ensure compliance.

The ICO has published a report setting out a number of recommendations for recruiters who use AI as part of their screening process.

Recruiters who use AI tools in recruitment must complete a Data Protection Impact Assessment (DPIA) to outline their process for identifying and minimising data protection risks.

Recruiters should ensure that AI software is tested for potential fairness, accuracy and bias issues prior to launch. This should then be monitored regularly, and appropriate action should be taken to address any issues, including human involvement if necessary. Recruiters should be able to demonstrate that their AI tools are processing personal information fairly.

Recruiters should ensure that the AI providers they use regularly test the software for potential bias. Most providers use a ‘four-fifths rule’ as a minimum threshold: the selection rate for any group must be at least four fifths (80%) of the selection rate of the group with the highest rate. Testing should be carried out at regular intervals, and certainly before launching any changes. Measures should then be put in place to mitigate any issues, and the risks and mitigations should be recorded in the DPIA.
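The four-fifths check described above reduces to simple arithmetic. The sketch below illustrates it with hypothetical figures (the group names and numbers are invented for illustration, not drawn from any real audit or provider's actual implementation):

```python
# Illustrative four-fifths (adverse impact) check.
# All group names and figures below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants in a group who passed the screening."""
    return selected / applicants

# Hypothetical screening outcomes by group.
outcomes = {
    "group_a": {"selected": 48, "applicants": 100},
    "group_b": {"selected": 30, "applicants": 100},
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
highest = max(rates.values())

# A group is flagged if its selection rate falls below
# four fifths (80%) of the highest group's rate.
for group, rate in rates.items():
    ratio = rate / highest
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```

With these figures, group_b's rate (0.30) is only 62.5% of group_a's (0.48), so it falls below the 80% threshold and would be flagged for investigation and mitigation.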

Recruiters can request the test results, reports and evidence of actions AI providers have taken to address fairness, accuracy, or bias issues in AI tools or outputs. They should check these thoroughly and ensure they demonstrate that the AI software is operating fairly and not discriminating against minority groups.

Recruiting managers would be well advised not to rely solely on AI outputs to make automated recruitment decisions, and should instead ensure a level of human input. Where decisions are automated, recruiters should ensure that candidates are informed of this and have a simple way to object to or challenge those decisions.

The government has made a number of recommendations for responsible AI in recruitment. This includes bias auditing which should be repeated at regular intervals to ensure consistent performance.

The government also recommends that employers make reasonable adjustments where necessary. It may be possible to provide reasonable adjustments that enable a disabled candidate to use the AI recruitment tool without being disadvantaged; for example, text-to-speech software would enable a candidate with a visual impairment to use a chatbot. Where adjustments are not possible, however, the AI software must be removed from the recruitment process, and the employer will need to consider what is required to allow the candidate to participate on an equal basis with others.

Overall, the risks of using AI in recruitment are clear. Employers should ensure they comply with the legislation and guidance in order to avoid a reduction in competitive advantage, inefficient hiring strategies, and missing out on excellent candidates, as well as the potential for costly legal proceedings for discrimination.