The legal pitfalls of using AI at work

With businesses increasing their use of artificial intelligence, Georgia Roberts and Özlem Mehmet outline the risks involved

Businesses are increasingly using artificial intelligence (AI) to speed up decision-making and other HR processes, such as recruitment, work allocation, management decisions and dismissals. 

The types of decision-making AI is performing in the workplace include:

  • Profiling: using algorithms to categorise data and find correlations between data sets. This can be used to make predictions about individuals; for example, by collecting data on employees to predict and/or conclude they are not meeting targets, potentially leading to capability proceedings or dismissals. 

  • Automated decision-making (ADM): where AI makes a decision without human intervention. For example, where a job candidate is required to complete a personality questionnaire as part of a recruitment process and is automatically rejected on the basis of their score. 

  • Machine learning: where machines are taught, using algorithms, to imitate intelligent human behaviour. For example, image recognition, which can be used in assessing candidates’ performance in video interviews. 
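In practice, ADM of the kind described above can be as simple as a fixed score cut-off applied with no human review. The sketch below is purely illustrative — the function name, scores and pass mark are invented for this example, not drawn from any real recruitment system:

```python
# Hypothetical sketch of automated decision-making (ADM) in recruitment:
# a candidate's questionnaire scores are averaged and compared against a
# fixed cut-off, with no human involved in the outcome.

PASS_MARK = 70  # illustrative cut-off, chosen arbitrarily


def screen_candidate(scores: list[int]) -> str:
    """Return 'progress' or 'reject' based solely on the average score."""
    average = sum(scores) / len(scores)
    return "progress" if average >= PASS_MARK else "reject"


print(screen_candidate([80, 75, 90]))  # progress
print(screen_candidate([60, 55, 65]))  # reject
```

The legal significance is that nothing in this decision path gives a human the chance to spot a relevant factor the score ignores — which is precisely what article 22 of the UK GDPR, discussed below, is concerned with.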

Discrimination risk 

The use of AI in employment brings the risk of ‘algorithmic discrimination’. As the algorithms behind AI are created by humans, they can reflect the biases of their designers. Employers should ensure their use of AI does not fall foul of the Equality Act 2010, which prohibits discrimination based on protected characteristics. The ‘decisions’ of the AI software and the underlying data used within it should, therefore, be checked so that any apparent discrimination is picked up and mistakes made by the software (such as disregarding relevant factors) are corrected.

An example of the danger of biases embedded in AI software involves Uber, which faced an employment tribunal claim from one of its former UK drivers on the basis that its facial recognition software worked less well with people of colour, resulting in the claimant’s account being wrongly deactivated. 

Amazon also made headlines when it had to scrap its AI recruitment tool after the tool was found to prefer male over female candidates. The software made its recruitment decisions by analysing data on previous successful candidates, the majority of whom were male because the tech industry is male dominated. 

In this example, a female candidate could bring an indirect discrimination claim, as the algorithm places her at a substantial disadvantage because of her sex. To defend such a claim, the employer would need to show the use of the technology was a proportionate means of achieving a legitimate aim. While the use of technology to streamline the recruitment process may be a legitimate aim, it is difficult to see how such a tool, which can have such significant implications for a prospective employee, can be a proportionate means of achieving that aim without any human oversight. 
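One way an employer might surface this kind of disparity before a claim arises is a simple statistical check on the tool’s outcomes, comparing selection rates across groups. The sketch below is a hedged illustration only: the figures are invented, and the 0.8 threshold is the US ‘four-fifths’ rule of thumb, not a test under the Equality Act 2010:

```python
# Illustrative adverse-impact check on a screening tool's outcomes:
# compute each group's selection rate and flag any group whose rate
# falls below 80% of the highest group's rate. All figures, names and
# the 0.8 threshold are illustrative, not an Equality Act standard.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (number selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}


def flag_adverse_impact(outcomes: dict[str, tuple[int, int]],
                        threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose selection rate is below `threshold` times
    the best-performing group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}


# Hypothetical figures: 60 of 100 men selected vs 30 of 100 women.
flags = flag_adverse_impact({"men": (60, 100), "women": (30, 100)})
print(flags)  # women are flagged: 0.30 / 0.60 = 0.5, below the 0.8 threshold
```

A check like this does not establish or rebut indirect discrimination, but running it regularly is the sort of human oversight that would support a proportionality defence.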

Another issue with AI is where responsibility lies for the decisions it makes. It is clear that a manager in charge of hiring workers is subject to anti-discrimination duties in law, but who is held responsible when AI software introduces discrimination into an employer’s decision-making process? The answer remains unclear. 

Data protection

Although there is currently no employment legislation preventing or limiting the use of AI in the workplace, data protection law provides some protection for individuals who may find themselves subject to AI decisions.

Article 22 of the UK GDPR contains provisions aimed at protecting individuals from both automated decision-making and profiling, by limiting the use of such processes and placing safeguarding requirements on organisations seeking to use them. However, in September last year the government signalled it might repeal article 22 as part of its proposed changes to the UK data protection regime, so these protections may wane.

Possible reform

The EU published a proposal for a regulation on AI last year, aiming to address the risks of its use, particularly in areas that have a significant impact on individuals’ lives. It suggested that AI involved in employment decisions should be classified as ‘high risk’ and therefore subject to specific safeguards. It is unclear whether such regulation would be mirrored in the UK if enacted in the EU. 

In May 2021, the TUC and the AI Consultancy published a report that put forward wide-ranging suggestions for reform. These included reversing the burden of proof in discrimination claims that challenge AI or ADM systems in the workplace, so that the employer would have to disprove discrimination rather than the employee having to prove it. The report also called for statutory guidance on steps that may be taken to avoid discrimination where AI is used.

AI will undoubtedly continue to drive significant innovation in the world of work. The Office for Artificial Intelligence is expected to set out the UK’s national position on regulating AI soon. It will be interesting to see what proposals are put forward relevant to the workplace. In the meantime, employers should remain vigilant to minimise the risk of claims and retain a degree of human involvement in their processes.

Georgia Roberts is an associate and Özlem Mehmet a professional support lawyer in the employment department at Kingsley Napley