The pitfalls of using AI-based technology in the workplace

1 Sep 2020 By David Lorimer

A recent Court of Appeal judgment highlights the legal risk of implementing facial recognition software. David Lorimer explains what employers should consider

High-profile new regulation, legal challenges or reputation-threatening headlines often prompt employers to revisit their tech transformation strategy, but it should not take a crisis to do so. Solutions that use automatic facial recognition (AFR), robotic process automation (RPA), artificial intelligence (AI) or other forms of machine learning can carry reputational and legal risks, but can also help businesses become more efficient and competitive.

What did the Court of Appeal find?

In Bridges v South Wales Police, Ed Bridges challenged South Wales Police's use of live facial recognition in public. The Court of Appeal found that South Wales Police had failed to comply with its obligations under privacy, data protection and anti-discrimination law. In particular, it had failed to properly consider both the potentially discriminatory biases built into many AFR platforms and the significant intrusion into the private lives of those monitored that the rollout of the solution represented.

What does this mean for employers?

Many businesses already use, or plan to roll out, solutions that rely on AFR, RPA and AI. For instance, many employers use recruitment tech that records candidates' answers to particular questions and automatically scans the recordings for keywords, tone and facial expressions, using these signals to score the candidates. It has been reported that, in many cases, such platforms are less reliable at identifying expressions for black, Asian and minority ethnic (BAME) and female candidates – which will rightly be of significant concern to diligent employers.

As the same potential biases were discussed at length by the Court of Appeal, and formed the basis for its findings, those platforms and solutions will be under the microscope.

What should businesses do?

The major lesson from this case is that employers must not bury their head in the sand. They will be treated as being 'on notice' of the particular challenges of these types of platforms, and must ensure they fully consider and navigate those before implementing them. As a minimum, companies must:

  • seek to fully understand the potential biases in the solution. Employers must work with their vendors to get under the surface: to establish, on the basis of rigorous testing – ideally carried out by independent third parties (eg the National Institute of Standards and Technology or the International Organization for Standardization) – whether there are biases and, if so, how those biases might in practice impact on people of a particular race, nationality, sex or religion, for example;  
  • carefully consider what aim they are pursuing, and whether that could be achieved by taking a different approach with no such discrimination risks; 
  • seek to ensure that any risks identified are eliminated through the way in which any solution is implemented;  
  • build in mitigations and 'fail safes' to guard against any identified risks; for instance, ensuring human review of any decisions made (and the data on which they were based);
  • document those considerations in an equality impact assessment. Although these are only mandated in respect of organisations carrying out public functions, it is advisable for private organisations to make use of this tool, to ensure and demonstrate that they are taking the risk of discrimination seriously; and 
  • not forget the privacy and data protection risks too. Minimal steps here include identifying lawful bases for the processing of special category or sensitive personal data, carrying out a data protection impact assessment to weigh the privacy risks against the potential benefits and to consider the proportionality of the proposed solution, and navigating the GDPR's additional hurdles when tools making automatic decisions are used.
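To make the first of those steps concrete, a bias audit of an automated screening tool often starts by comparing outcome rates across candidate groups. The sketch below is purely illustrative – the group labels, data and helper names are hypothetical, and the 'four-fifths' threshold it applies is a US regulatory heuristic rather than a UK legal test – but it shows the kind of quantitative check an independent tester might run:

```python
# Illustrative sketch only: a simple disparate-impact check on
# hypothetical screening outcomes. Group names and data are invented.
from collections import defaultdict

def selection_rates(outcomes):
    """Return the pass rate for each candidate group.

    outcomes: iterable of (group, passed) pairs, e.g. [("group_a", True), ...]
    """
    totals = defaultdict(int)
    passes = defaultdict(int)
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group pass rate.

    Ratios below 0.8 are often flagged for further scrutiny (the US
    'four-fifths rule' -- a heuristic, not a UK legal standard).
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes: group_a passes 60% of the time, group_b 40%
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 40 + [("group_b", False)] * 60)

rates = selection_rates(outcomes)
print(rates)                          # {'group_a': 0.6, 'group_b': 0.4}
print(disparate_impact_ratio(rates))  # 0.666..., below the 0.8 heuristic
```

A real audit would go much further – controlling for legitimate selection criteria and testing the underlying model, not just its outputs – but even a headline disparity of this kind is the sort of finding an employer would need to investigate and document before deployment.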

Employers are by no means prevented from harnessing innovative new solutions, but they are required to show that they have carefully considered and applied the important safeguards laid down in anti-discrimination and privacy law. 

David Lorimer is director of employment, pensions, immigration and compliance at Fieldfisher
