AI is all around us, and nowhere more so than in the HR department. AI-driven recruitment tools, for example, are becoming common. They ‘learn’ the selection criteria and CV scoring of human recruiters and then apply those same principles across a wider and deeper pool of talent than any human team could hope to cover.
These AI systems also aim to eliminate any unconscious bias humans might have (for example, in relation to school background, nationality or gender). However, while these tools have a place, they also create a new risk: if unconscious bias is present in the source material (i.e. the human decisions that the AI ‘learns’ from), the technology risks simply applying that bias more efficiently and effectively than any human could. The AI cannot make a conscious effort to overcome those biases in the way a human would. Microsoft’s ‘Tay’ bot showcased this problem: when Twitter users fed it racist content, the bot absorbed and mimicked those prejudices.
In the employment law sphere, AI-driven document review platforms can be used to sort through documents and respond to employee data subject access requests, disclosure orders and other ‘needle in a haystack’ requests. Instead of relying on teams of paralegals or junior HR advisers, AI software ‘watches’ an experienced reviewer evaluate a sample set of documents. The software then learns that reviewer’s decision-making process and applies it across thousands of documents per minute.
The accuracy levels are astounding: one recent study found an AI contract review platform achieving higher accuracy than experienced lawyers. This reduces cost and helps HR teams get to grips with large volumes of documents more quickly.
Another example of AI in today’s workplace is performance management software. This measures productivity on a granular basis and learns what to do with the information – comparing warehouse workers’ picking rates, for example. These figures are then crunched to recommend remuneration, benefits and disciplinary decisions.
We are not yet at the stage where those decisions are implemented automatically; human review and sign-off are usually required. The General Data Protection Regulation (GDPR) has a hand in this: employees have the right not to be subject to decisions based solely on automated processing, and any time savings can be wiped out if every decision is challenged.
But many HR departments are now introducing predictive analytics software to improve safety performance. Employees collect observation data through mobile apps, uploading photos and reporting hazards. AI can use this information to reduce recordable incidents by learning which indicators are associated with higher incident rates. In this way, AI can anticipate which interventions in the company’s safety initiatives will have the greatest impact and provide useful insight into safety records.
These are just some of the AI applications used by businesses today, and undoubtedly AI has a future in the HR department. Nonetheless, issues remain. Although falling, the cost of implementation remains high for most medium-sized businesses. And as these novel technologies roll out, teething problems mean the experience is far from seamless for most adopters. While these issues will be ironed out as the technology matures, for now decision makers still need to weigh the cost and time of implementing these complex solutions against the potential benefits. Just because you can doesn’t always mean you should.
Raoul Parekh is a partner and Dónall Breen an associate at GQ|Littler