How will new AI legislation affect businesses?

With concerns growing over the use of artificial intelligence in the workplace, People Management explores the implications parliamentary proposals would have for work processes

Despite the hope that artificial intelligence (AI) might revolutionise the future of work, the general consensus so far seems to be that it has negative implications for the workforce. The TUC warned in March that workers could be “hired and fired by algorithm”, while a recent Harvard Business School study revealed that the majority (88 per cent) of employers believe qualified applicants are filtered out by screening software.

Campaigners and policy makers are also questioning the impact AI is having on the quality of work – whether it be algorithms deciding how much work app-based couriers receive, or automated performance monitoring pushing warehouse staff to forgo toilet breaks to meet packing targets. The issue was summarised by a report from the Institute for the Future of Work, which found that it is “not the replacement of humans by machines but the treatment of humans as machines” that defines the current era of work.

Last week, a group of MPs decried the “growing body of evidence” pointing towards a “significant negative impact on the conditions and quality of work across the country” caused by the use of algorithms in the workplace. In a report, the All-Party Parliamentary Group (APPG) on the Future of Work argued that monitoring workers and setting performance targets through algorithms was damaging employees’ mental health, and proposed new legislation to give workers greater visibility of how their employers use digital tools.

People Management has explored the contents of the APPG’s proposed Accountability for Algorithms Act (AAA), and what it might mean for employers.

What are the current risks of using algorithms in the workplace?

“All employers and all employees are affected by AI,” says Dr Neil McBride, reader in IT management and researcher at the Centre for Computing and Social Responsibility at De Montfort University Leicester. And because AI depends on humans to provide the rules and data that algorithms interpret, McBride argues, humans remain accountable for the decisions made on the back of an AI’s predictions.

“Decisions from algorithms should not be accepted blindly, but used as one factor in humans applying wisdom to decision making,” he adds. If an algorithm predicts an employee is planning to leave the company or trying to get pregnant, for example, and this information is used to make decisions on training or promotions, it could lead to claims of discrimination.

Alan Lewis, partner at Constantine Law, adds that the mental health effects of algorithms that impose “real-time, micro-management and automated assessment” could leave staff with grounds for disability status under the Equality Act. And where an employee is already disabled, the employer has specific duties. “The use of algorithms is a ‘provision, criterion or practice’ (PCP) under the Equality Act and the employer is under a duty to take reasonable steps so that the PCP does not put the disabled employee at a substantial disadvantage in comparison with non-disabled employees,” he explains.

What changes would the proposed legislation bring?

The APPG said its proposals aim to improve “clarity and fairness” around the use of AI in the workplace, arguing that many employees currently feel an “absence of agency” and a lack of confidence over how data is used and how AI makes decisions about performance.

Two key provisions of the proposed legislation are a new requirement for employers to give staff a “full explanation” of how any algorithm they use works, and a requirement for firms to carry out algorithmic impact assessments to identify the risks – for example, restricting some groups’ access to work – that rolling out such technology might pose.

Under the new rules, workers would also have the opportunity to give feedback on how these tools should be used in the future.

How would the new laws affect how organisations operate?

As a general rule, Hayfa Mohdzaini, senior research adviser at the CIPD, says sharing how AI makes decisions at a high level is “good practice”, particularly when those decisions affect recruitment and promotions. But, she warns, sharing algorithms with employees is not without risk. “Employers [might] accidentally divulge confidential information or employees might not understand it and could misinterpret it,” she says.

For a successful transition, therefore, employers need to make sure HR teams are involved from the start of the process. “Too often HR only gets involved at the end – the training stage – when their expertise should be used to manage employee wellbeing and workforce planning throughout any rollout,” she explains.

Lewis advises that employers nervous about maintaining confidentiality when sharing information about algorithms they have developed would need to rely on non-disclosure agreements and confidentiality clauses in employment contracts.

He also warns that such a rule could trigger more whistleblowing claims from workers who believe the organisation has committed breaches and that there is a public interest to be protected.