Gaps in UK employment law over the use of artificial intelligence (AI) in HR decision making could lead to “widespread” discrimination in the workplace, unions have warned.
In a report, the TUC said the increased use of algorithms, machine learning and automation across HR processes could leave employees unable to understand or challenge “life-changing” employment decisions.
The report said AI was already making “high risk” decisions that directly impacted the lives of workers – including assessing candidates, monitoring performance, allocating shifts and in some cases making redundancy decisions – despite the well-known risk that human biases can often be programmed into these algorithms.
It also warned that the pandemic was accelerating this trend, as businesses looked to improve the way they worked remotely.
Frances O’Grady, general secretary of the TUC, said that without “fair rules” the use of AI could lead to “widespread discrimination and unfair treatment – especially for those in insecure work and the gig economy”.
“AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work – like who gets hired and fired,” she said.
“Every worker must have the right to have AI decisions reviewed by a human manager. And workplace AI must be harnessed for good – not to set punishing targets and rob workers of their dignity.”
The union is calling for legal reform for the ethical use of AI at work. This would include imposing a legal duty on employers to consult trade unions on the use of “high risk” forms of AI in the workplace, a legal right for workers to have a human review of decisions made by AI systems, and amendments to the UK General Data Protection Regulation (GDPR) and Equality Act to guard against discriminatory algorithms.
David Lorimer, director at law firm Fieldfisher, said that while it was understandable that candidates and employees had concerns over the use of AI in key HR decisions, especially those about hiring and firing, workers already had some protection.
“For instance, decisions that are tainted by discrimination, which can be the case where algorithms are trained on biased sets of data, are challengeable,” he said.
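To illustrate the mechanism Lorimer describes, the sketch below (a hypothetical example, not from the report) shows how a screening tool trained on biased historical hiring decisions simply reproduces that bias: the model memorises past hire rates by group, so candidates from the group that was previously rejected more often are screened out regardless of individual merit.

```python
# Illustrative sketch with made-up data: a screening "model" trained on
# biased historical hiring decisions reproduces that bias in its output.

# Historical records: (years_experience, group, hired). The "hired" labels
# reflect past human bias: group B candidates were rejected more often.
history = [
    (5, "A", 1), (4, "A", 1), (2, "A", 1), (1, "A", 0),
    (5, "B", 0), (4, "B", 1), (2, "B", 0), (1, "B", 0),
]

def train(records):
    """Learn per-group hire rates - a crude model that memorises the bias."""
    counts = {}
    for _, group, hired in records:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + hired)
    return {g: k / n for g, (n, k) in counts.items()}

model = train(history)

def screen(group, threshold=0.5):
    """Shortlist a candidate if their group's historical hire rate clears the bar."""
    return model[group] >= threshold

print(screen("A"))  # True  - group A's past hire rate is 3/4
print(screen("B"))  # False - group B's past hire rate is 1/4
```

The point is that nothing in the code mentions a protected characteristic explicitly; the discrimination enters entirely through the training data, which is why such decisions remain challengeable under the Equality Act.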
Employers can only make decisions without human intervention in limited circumstances, Lorimer added, and they must be transparent about this, under the UK GDPR.
“Certainly, there is more to do when it comes to baking ethical considerations into AI tools deployed in the workplace. My experience is that employers are increasingly live to this, and are taking steps to carefully consider these issues,” he said.
This was echoed by Kate Palmer, HR advice director at Peninsula, who said companies that failed to tackle discrimination risked serious damage to their business. Beyond the risk of legal action and the financial cost that carries, allegations of discriminatory practices could also harm an organisation’s reputation.
“They may find that attrition becomes out of control, coupled with an inability to recruit new staff who refuse to be employed by a business that does not put the fair treatment of its workforce as a top priority,” said Palmer.
Alan Price, chief executive of BrightHR, said there needed to be a shift in mindset to ensure AI wasn’t seen as a replacement for humans but as a new way humans and technology can work together.
“HR should consider how the message will be communicated to employees and be prepared to answer questions from a concerned workforce; a good change management culture will be key. Investment in upskilling current staff should also be on the agenda,” he said.