The legal risks of automated decision-making

30 Sep 2020 | By Ed Hayes and Sarah Wall

A recent case against Uber could have major implications for businesses that use algorithms to make decisions about their employees, as Ed Hayes and Sarah Wall explain

Two UK drivers have started legal proceedings to access their personal data and understand how Uber makes automated decisions that affect them. These decisions, the drivers argue, are based on their use of the app, location, driving behaviour and communication with customers and Uber’s support team. The drivers believe these decisions affect job allocation and pay rates.

They claim that Uber has failed to comply with its obligations under the General Data Protection Regulation (GDPR) by failing to provide full access to the drivers’ personal data, and failing to provide complete information about its automated decision-making.

Following the recent controversy over the allegedly discriminatory algorithm used for A-level results in England, organisations should be clear about their obligations when using personal data in this way, and about the key takeaways from this case.

Your obligations

Under the GDPR, data subjects have the right to be informed of:

  • the existence of automated decision-making;
  • meaningful information about the logic involved; and
  • the envisaged consequences for the data subject.

The GDPR gives individuals the right not to be subject to a decision based solely on automated processing (ie, with no human involvement) that produces legal effects concerning them or similarly significantly affects them.

There are circumstances where this right does not apply; for example, where the individual explicitly consents to the automated decision-making.

In these circumstances, employers must implement suitable measures to safeguard the individual’s rights and freedoms and legitimate interests. The individual should also have the right to obtain human intervention, to express their point of view and to contest any automated decision.

Key takeaways for employers

1. Have you assessed the risk of using automated decision-making?

A data protection impact assessment (DPIA) is almost certainly required if you intend to process personal data using AI systems. You must carry out a DPIA before the processing starts, identify the level of risk involved and put in place suitable mitigations. It’s also important to review the DPIA regularly, particularly when there is any change to processing.

2. What are you telling your employees?

Transparency and accountability are overarching principles of the GDPR. You should ensure your privacy policy is clear in relation to automated decision-making, and that it contains all the information required under the GDPR. The Information Commissioner’s Office says “meaningful information about the logic” need not be a complex explanation of the algorithm; it should simply describe the type of information collected and why it is relevant.

3. What happens if a decision or process is challenged?

Organisations should be aware of the risks of using automated decision-making (for example, that it may lead to discrimination) and should adopt measures to safeguard against them.

If an employee is not happy with the process used by their employer, they will have the right to ask for human intervention and to contest the decision. Businesses should have a process in place that enables employees to do this.

Ed Hayes is legal director and Sarah Wall a trainee solicitor at TLT
