Any employer considering using generative artificial intelligence (AI) for HR purposes will need to navigate its many pitfalls, including its inability to distinguish between the reliable and unreliable content it sources and the fact that its output is only as good as the input from the instructing human.
AI can also be manipulated and can reflect biases present in the material it has been trained on. A well-known example is Amazon's discovery that the algorithm it had created to screen candidates for interview was systematically biased, repeatedly selecting male CVs over female CVs and so replicating the bias in the historical recruitment practices on which it had been trained.
There is also the minefield of data protection and privacy to navigate, given the data that AI processes to function and the monitoring and surveillance algorithms it employs. All data subjects, including employees and prospective employees, have the right under data protection legislation to be informed of automated decision making and, subject to exceptions, not to be subject to decisions based solely on automated data processing.
However, even at this early stage of development, AI can already perform an astonishing range of functions, such as writing job descriptions, filtering CVs, summarising complex grievances and reviewing policies, to name a few.
Can AI replace people?
The question remains, can AI develop to such an extent that it is able to provide a realistic alternative to people in HR teams? Can AI ever possess the same level of skill, empathy, sensitivity, creativity and interpersonal skills needed in people-centric roles? This seems unlikely. But it does already seem inevitable that certain aspects of a traditional HR role will be increasingly resourced by AI. This will provide cost efficiencies and increase productivity, provided certain checks are in place. Using ChatGPT and other forms of AI may save overstretched HR managers time.
As well as considering how AI can be successfully implemented within the HR team, HR may also be required to proactively review whether other roles within their organisation could be supported by AI or potentially replaced by it. Employers will need to immerse themselves in this new wave of technological advancement to stay ahead. There will also be significant opportunities for those that adjust quickly to reap the benefits of the capabilities of AI, but in equal measure there could be significant consequences for jobs.
The threat to jobs posed by AI is not a worry unique to HR, and there is growing demand to legislate in this area. Current employment law was not designed to protect workers against AI, but arguably provides some limited protection. For example, an employee who is made redundant and replaced by AI may be able to assert that their dismissal fell outside the range of reasonable responses and was therefore unfair. If AI is used to select candidates for interview but, on account of its inherent biases, discriminates in that selection process, the Equality Act 2010 could afford some legal recourse. However, this is new and currently untested ground and, while employment judges are adept at deciding cases that push the boundaries of existing law, the government will need to legislate in the long term.
The TUC has criticised the UK for falling short in preparing to protect workers from AI, while some European countries are taking more interventionist approaches. A government white paper – AI regulation: a pro-innovation approach – published in March 2023 proposes relying on existing regulations and regulators to provide protection without stifling progress. The proposal was widely criticised, and various bodies have put forward counter-proposals advocating more regulation.
Possible options for more interventionist reform are considered in a research briefing, Artificial intelligence and employment law, recently published by the House of Commons. This includes consideration of the Artificial Intelligence (Regulation and Workers’ Rights) Bill 2022-23, put forward as a private member’s bill by a Labour MP, which proposes to strengthen workers’ rights against the potentially damaging ramifications of the increased use of AI in the workplace.
More recently, the UK hosted an international AI safety summit, suggesting the government may be changing tack, yet the King’s Speech still made no mention of plans for new legislation to regulate AI. Inevitably, there will be more debate on this topic. In the meantime, until the law catches up, developing policies and educating staff on the appropriate use of AI is advisable.
A future world where a tricky conversation with an employee takes place with some form of AI human resource seems quite far-fetched right now, but not so long ago an AI-generated actor replacing a real actor was also implausible. As the uptake of AI in the workplace becomes more pronounced, legislating to protect the humans in HR and other roles will inevitably become the answer.
Charlotte Sloan is legal director in the employment team at Birketts