Artificial intelligence (AI) is defined as the “theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.
At its core, the use of AI in the workplace has the potential to improve efficiency, productivity and accuracy while freeing employees from menial, repetitive tasks so that they can focus on the more engaging aspects of their roles that require emotional intelligence.
One particular profession that has seen a recent rise in the use of AI is human resources; specifically, in the area of recruitment. In an increasingly competitive hiring market, recruitment professionals spend a considerable amount of time screening multiple CVs to identify suitable candidates. Often, a number of the applications are from candidates who do not have the requisite experience for the role.
An effective AI algorithm could, in the first instance, search through CVs to identify a pool of suitable candidates, reducing the amount of time spent doing so by recruiters. AI can enable employers to carry out searches using certain parameters such as job title, industry or education. Some AI platforms even claim to review a potential candidate’s online presence in order to determine suitability.
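The kind of first-pass screening described above can be sketched as a simple filter. This is an illustrative toy only, assuming a hypothetical CV data structure; the field names and criteria are invented for the example and do not reflect any real platform's behaviour.

```python
# Toy sketch of parameter-based CV screening (hypothetical fields, not a real API).

def screen_cvs(cvs, required_title=None, required_industry=None, required_education=None):
    """Return only the candidates whose CV matches every criterion that is set."""
    shortlist = []
    for cv in cvs:
        if required_title and required_title.lower() not in cv.get("job_title", "").lower():
            continue
        if required_industry and required_industry.lower() not in cv.get("industry", "").lower():
            continue
        if required_education and required_education.lower() not in cv.get("education", "").lower():
            continue
        shortlist.append(cv)
    return shortlist

cvs = [
    {"name": "A", "job_title": "Software Engineer", "industry": "Fintech",
     "education": "BSc Computer Science"},
    {"name": "B", "job_title": "Recruiter", "industry": "HR",
     "education": "BA History"},
]

# Only candidate A's job title contains "engineer".
print([c["name"] for c in screen_cvs(cvs, required_title="engineer")])  # ['A']
```

Even a sketch this crude shows where the time saving comes from: the recruiter reviews only the shortlist, not every application.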
In theory, the use of AI in the recruitment process should help address the very real problems of racial and gender bias, particularly unconscious bias, and ultimately achieve more equality and diversity in the workplace. Yet the use of AI in recruitment has not been an entirely positive experience for the organisations already using it.
In October 2018, according to a Reuters report, Amazon’s use of AI as a recruitment tool was found to have been assessing candidates in a way that showed bias against women. Amazon’s AI system had taught itself to downgrade applications that included the word ‘women’s’ as well as applications by individuals who had attended certain all-women’s colleges. In an attempt to remove this obvious gender bias, Amazon edited the AI platform but ultimately lost belief in the system and abandoned the project.
Worryingly, an AI platform may have built-in biases from its earliest phase. For example, a machine taught using more photos of light-skinned people than of dark-skinned people may be less effective at recognising darker-skinned faces. Arguably, AI is only as unbiased as the people who taught it and the data it was taught with, which, if historic, can be unrepresentative of reality. Bad data can lead to myriad racial, gender or other biases. This may expose companies to discrimination claims from unsuccessful candidates, and to the financial and reputational damage that follows.
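The effect of unrepresentative training data can be illustrated with a deliberately simple sketch. The numbers and the nearest-neighbour "recogniser" below are invented for the example, assuming made-up feature vectors for two groups; no real system or dataset is implied.

```python
# Toy demonstration: a 1-nearest-neighbour classifier trained on skewed data
# performs worse for the under-represented group. All data is fabricated.

import math

def nearest_label(sample, training):
    """Return the label of the training example closest to `sample`."""
    best = min(training, key=lambda ex: math.dist(ex[0], sample))
    return best[1]

# Training set: nine examples of group A, only one of group B.
training = [((float(i), 0.0), "A") for i in range(9)] + [((20.0, 5.0), "B")]

# A new group-B sample that happens to lie nearer the many group-A examples
# than the lone group-B example is misclassified as "A".
print(nearest_label((12.0, 2.0), training))  # prints "A" (misclassified)
```

The model is not malicious; it simply reflects its skewed inputs, which is the mechanism behind the face-recognition example above.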
Users of AI in the context of recruitment must also be aware of the enhanced protection that the General Data Protection Regulation (GDPR) gives people who are subject to automated decision-making processes, including profiling, which have a legal, or similarly significant, effect on them.
In spite of the potential pitfalls, employers shouldn’t write off the use of AI completely. Its very essence means it can be taught how to learn and interpret specific data. It follows that there are ways in which, if approached correctly, bias in AI can be reduced or eliminated. However, achieving this will no doubt require human intelligence – the very thing that AI seeks to replace.
Camilla Beamish is a senior associate in the employment team and Kathryn Rogers a partner in the technology team, both at Cripps Pemberton Greenish