Can you use artificial intelligence to sack workers?

Following claims by three women that they were unfairly dismissed by an algorithm, Malcolm Gregory and Catherine Hawkes outline the risks of using AI in the workplace

AI is now very much a feature of the workplace, but HR specialists need to be alive to the risks it presents. Unintended consequences of AI decision-making can include bias, discrimination, unfair treatment and an inability for employees to fully understand and accept decisions. Failure to mitigate these risks can expose an employer to employment tribunal claims and reputational damage.

One such issue recently arose for MAC, a subsidiary of Estée Lauder, when three make-up artists brought claims that they had been unfairly made redundant because of an automated decision generated by AI software from HireVue.

The three women were informed that they had to reapply for their positions by way of video interview. The AI software analysed the content of the women's answers and their facial expressions, together with other metrics relating to their job performance. Following the video interview, all three women were informed they were being made redundant, in part due to the algorithmic decision-making (ADM).

The women complained of the lack of transparency and the fact that, when challenged, their employer was unable to explain on what basis they were being made redundant. The company defended its decision, arguing that the algorithmic assessment accounted for only a quarter of the marks awarded and that the AI software was used in tandem with human decision-making, which, overall, produced a fairer outcome. The women received an out-of-court settlement.

Under the GDPR (Article 22), employers may make solely automated decisions only in limited circumstances, and transparency about such processing is required. The above case highlights the risks associated with ADM and the serious consequences it can have in an employment context if sufficient transparency is not provided.

HR teams need to be alive to the potentially unintended consequences of any automated systems they use. It is important to maintain a human element during any process to counteract problems with bias or bad decisions. Quality control of AI is arguably more important than in other areas, given that such systems learn from their data and their own decisions. If the input data is defective and there is no mechanism for review, an employer could easily experience significant operational, legal and reputational problems.

When using ADM as part of a process to make staff redundant, HR teams should consider:

  • Engage in fact-based comparisons between AI results and human decisions; for example, run the algorithm against a human's decisions on the same matter and compare the results. If bias is discovered, the algorithm should be changed.

  • Prioritise transparency by clearly communicating how AI is being used in the workplace. This is particularly important with regard to data protection law, so plain-English, non-technical explanations should be used. Greater transparency and understanding within the HR team will make it easier for employees to raise queries or challenges.

  • Ensure there are informal and formal routes for employees to raise queries or challenge decisions.
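The first point above — running the algorithm and a human reviewer over the same cases and comparing the results — can be sketched in code. The snippet below is a minimal illustration only, not any vendor's method or the approach used in the case described; all names, groups and decision data are hypothetical.

```python
# Hypothetical sketch: compare algorithmic and human redundancy decisions
# on the same candidate pool, and check selection rates by group for
# signs of bias. All data here is illustrative.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Rate of 'retain' decisions per group."""
    totals = defaultdict(int)
    retained = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        if decision == "retain":
            retained[group] += 1
    return {g: retained[g] / totals[g] for g in totals}

def agreement_rate(ai_decisions, human_decisions):
    """Fraction of candidates on whom the AI and the human reviewer agree."""
    matches = sum(a == h for a, h in zip(ai_decisions, human_decisions))
    return matches / len(ai_decisions)

# Illustrative decisions for six candidates in two groups.
groups = ["A", "A", "A", "B", "B", "B"]
ai     = ["retain", "retain", "dismiss", "dismiss", "dismiss", "retain"]
human  = ["retain", "retain", "retain",  "dismiss", "retain",  "retain"]

ai_rates = selection_rates(ai, groups)   # group A: 2/3 retained, group B: 1/3
agreement = agreement_rate(ai, human)    # AI and human agree on 4 of 6 cases

# Ratio of the lowest to the highest selection rate; a value well below 1
# (commonly, below 0.8 under the US "four-fifths" rule of thumb) is often
# treated as a flag for possible adverse impact worth investigating.
rates = list(ai_rates.values())
impact_ratio = min(rates) / max(rates)
```

A low agreement rate or a skewed impact ratio would not prove bias on its own, but either should trigger the kind of human review and algorithm adjustment the bullet points describe.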

AI may not yet be robust or reliable enough to justify its use in complex HR work, yet the promise of time saved and its suitability for mundane tasks are accelerating its adoption in the workplace. The Trades Union Congress (TUC) has argued for changes in the law on the use of AI at work in its report, Technology managing people – the legal implications. It is important for HR teams to stay up to date with any legislative changes in this area and ensure any AI process used is subject to regular quality control and review.

Malcolm Gregory is a partner and Catherine Hawkes an associate in the employment law team at Royds Withy King