What type of person would a robot hire?

AI can take the legwork out of recruitment – but with debates over algorithm bias and the risk of human redundancies still raging, HR needs to tread carefully


On paper, artificial intelligence-powered recruitment sounds ideal. Computer programmes that can take all the hard work out of hiring and write job descriptions, source suitable candidates from pools of millions, screen out unsuitable applicants and even assess their personality and attributes via video interview? It’s little wonder the technology has already proved popular with thousands of organisations.

And yet the reality is very different from the concept on paper – and far from the perfect, technology-driven haven that was intended. As well as widely reported issues with bias in some AI recruitment tools (in 2018, for example, Amazon was forced to scrap an AI hiring programme it had been testing after the algorithm, trained on 10 years’ worth of application data that came predominantly from men, taught itself to screen out female candidates), the technology has rightly raised widespread concern that roles risk being replaced altogether by algorithms as it gathers pace. So, with this in mind, do the time and money-saving advantages of automation actually come at too high a cost when weighed against the potential risks – logistical, legal and reputational – of AI getting it wrong?

Above all, use of AI in the recruitment process needs to be relevant to the role, fair and inclusive to all candidates, says Hayfa Mohdzaini, senior research adviser at the CIPD. That means implementing it with thought and care to ensure the applicant, not just the organisation, is considered – and that it aligns with an employer’s values and brand. “AI needs to be tested before rollout and routinely afterwards to ensure it’s not introducing bias or disadvantage into the process and is easily accessible for all candidates,” says Mohdzaini. “Organisations should also be evaluating the use of AI by asking candidates and hiring managers for feedback and responding accordingly. They should also monitor the impact AI has on the diversity of their candidates.”

While it’s currently the responsibility of employers to monitor the impact of AI on candidates, there are growing calls for more oversight and governance in this space. The All-Party Parliamentary Group on the Future of Work is calling for employers to pre-emptively consider the impact of using automated decision-making technologies on workers from as early as the design stage. The government is also exploring six priority areas for developing an effective AI assurance system, which include introducing regulation and mandatory third-party audits of AI tools.

One significant aspect of the rhetoric around AI is its perceived ability to help organisations make more objective and transparent decisions. However, according to Dimitra Petrakaki, professor of technology and organisation at the University of Sussex Business School and member of the Digital Futures at Work Research Centre (Digit), this rests largely on the view that replacing humans with technology also removes bias, emotion and subjectivity. “There is evidence that suggests this is not the case,” she says. “AI technologies remain political technologies by their design. This means that as humans develop AI, their assumptions and stereotypes are transferred and become reflected in the AI.

“For example, AI may discriminate against candidates on the basis of their postcode or accent, reflecting assumptions about candidates’ socioeconomic backgrounds. Similarly, the ability of artificial intelligence to interpret information on the basis of patterns found in existing data sets also means that AI is prone to producing bias.”

As long as humans are behind their design, says Petrakaki, it will always be challenging to create watertight AI systems for hiring. Yet there are ways of critically reviewing the decisions they make post hoc, and challenging them as and when needed, to ensure bias does not creep into the process. Among the key questions to address are whether the algorithm makes decisions on the basis of candidates’ performance independently of their characteristics, and whether it can make hiring decisions that reflect principles of equity – or whether its decisions are driven by performance standards that are simply assumed to apply to everyone.

For larger employers with more significant recruitment pipelines, however, hiring automation has a compelling allure. Consumer goods giant Unilever, for example, receives more than two million applications in an average year. Given the high volume and associated time pressures – not to mention the added uncertainties in the era of a global pandemic – the company has found automated online testing can offer candidates a unified, simple, user-friendly journey with feedback in real time.

“Online testing has proven very successful as part of our selection processes for early career programmes,” says global head of employee experience Tom Dewaele. “The way in which we feed back to applicants varies based on the role, market and recruitment tools used. For example, candidates that complete our global gamification online assessment receive constructive feedback that is automated, but our recruitment team is always on hand to support candidates.” 

Similarly, betting and entertainment group Flutter uses Textio across two of its businesses to actively evaluate the language used in its job adverts and ensure they are gender neutral and unbiased. Chief people officer for group functions Emy Rumble-Mettle appreciates the value the tool brings to the company: “Much of the language inherent in the talent space has been perceived as biased, so we wanted to rule that out as quickly as we could and create adverts that ignite interest for as many people as possible.”
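To make that concrete, here is a minimal sketch of the kind of check such language tools perform: scanning an advert for words that research on gendered wording has flagged as masculine- or feminine-coded. The wordlists and function are illustrative assumptions, not Textio’s actual approach.

```python
# Illustrative sketch of gender-coded language screening in a job advert.
# The wordlists below are short samples for demonstration, not any vendor's lists.
import re

MASCULINE_CODED = {"competitive", "dominant", "ambitious", "rockstar", "ninja"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "dependable"}

def flag_coded_language(advert: str) -> dict:
    """Return any masculine- or feminine-coded words found in the advert text."""
    words = set(re.findall(r"[a-z']+", advert.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

advert = "We need a competitive, ambitious rockstar to join our collaborative team."
print(flag_coded_language(advert))
# {'masculine': ['ambitious', 'competitive', 'rockstar'], 'feminine': ['collaborative']}
```

A real tool would, of course, work from much larger, validated wordlists and suggest neutral alternatives rather than simply flagging terms.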

Nevertheless, she stresses that while AI will undoubtedly play a part in advancing the candidate experience at Flutter, it will act as an enhancement of the people experience rather than a replacement. “We wouldn’t want to lose our customer-centric focus by leaving some of the key decisions of our talent agenda to an AI tool,” she explains. “Our colleagues will always have a key part in how we apply more laser-focused emerging practices to our business without losing the human touch. AI is an enabler for our business, not a strategic lever.” 

Fears around job automation have also received substantial focus in recent years; PwC’s recent Hopes and Fears study found that three in five respondents (60 per cent) are concerned about the issue, with two in five (39 per cent) thinking their jobs could become obsolete within five years. And the worries are by no means unfounded: 2019 data from the Office for National Statistics found 7.4 per cent of 20 million jobs analysed – around 1.5 million in total – were at high risk of being automated in the future, including 58 per cent of administrative HR roles and 40 per cent of HR and industrial relations officer roles. However, this overlooks the huge opportunity for HR and recruitment professionals to upskill into new professional categories: domain experts capable of bringing together the right technology to reduce the administrative burden, a data-driven approach that enhances decision-making, and a human-led approach that produces real results for the workforce.

Rob McCargow, technology impact leader at PwC UK, issues a clarion call for people practitioners: “Now is the time for HR professionals to upskill themselves in how AI tools work, how to challenge vendors on their approach to mitigating risk, and how to implement effective governance that takes into account the particular features of these technologies.” 

The sentiment is echoed by organisational psychologist and author of I, Human Tomas Chamorro-Premuzic, who argues the only intelligent solution is to invest in reskilling and upskilling. As in every technological evolution or revolution before, he says, technology and innovation will destroy certain jobs while creating others – but the new jobs will require new and improved skills from humans. We need to worry not about AI’s ability to replace humans in mundane and predictable tasks or jobs, he explains, but about whether humans can acquire the skills to deploy their creativity and imagination in the new, more complex jobs that AI cannot do.

Chamorro-Premuzic advises that best practice should rest on a simple principle: humans augmented by AI are better than either on their own. This means AI should be used to take care of predictable, repetitive and low-level tasks, leaving humans to provide the creativity, curiosity and empathy that AI cannot. He suggests thinking of AI as a prediction engine: its accuracy and fairness depend on the quality of the training data, as well as the strength of the patterns in that data.

“The problem of bias, at least in HR, is nothing new in the AI world,” he says. “It often derives from the problem of replicating human bias. Because the training data or observations that we use to help AI learn are themselves based on biased outcomes or data, we are transferring human bias onto AI, and then replicating it or augmenting it at scale.”

He gives the example of training self-driving cars, during which humans classify objects as traffic lights, trees, or pedestrians, so AI can quickly learn to detect traffic lights, trees and pedestrians. However, when AI is trained to identify high-potential employees or people who are a good cultural fit, the training data is based on human preferences. “In other words, we ask AI to learn which people are generally designated as high potential or a strong fit in a given system without realising that those people may not necessarily be great at their job – they are just popular or privileged,” Chamorro-Premuzic explains.

So, when chatbots are accused of sexism because they successfully predict that middle-aged white male engineers are more likely to get promoted, they are not being biased or sexist; they have simply reverse-engineered the algorithm the organisation is already using. Fixing the problem, he says, means not asking AI to predict biased preferences. Instead, it should be trained to predict actual value added, talent and performance – which calls for careful measurement of those attributes. This, Chamorro-Premuzic emphasises, is not a task for AI, but one for humans.
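A minimal sketch of this failure mode, using entirely synthetic data and a hypothetical ‘proxy’ feature, is below. The model is never shown the demographic attribute, yet because it is trained on historical promotion labels that favoured one group, its predictions reproduce the skew.

```python
# Synthetic illustration of bias transfer: a model trained on biased historical
# labels replicates the skew even without seeing the demographic attribute,
# because a correlated proxy feature carries the signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

group = rng.integers(0, 2, n)                          # two demographic groups
skill = rng.normal(0, 1, n)                            # what we *should* predict from
proxy = skill + 1.5 * group + rng.normal(0, 0.5, n)    # e.g. "years in similar roles"

# Historical promotions favoured group 1 regardless of skill (the biased label)
promoted = (skill + 2.0 * group + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.c_[skill, proxy], promoted)
pred = model.predict(np.c_[skill, proxy])

for g in (0, 1):
    print(f"group {g}: predicted promotion rate {pred[group == g].mean():.0%}")
# The predicted rates diverge sharply between the groups, mirroring the biased labels.
```

The remedy Chamorro-Premuzic describes amounts to changing the label: train on carefully measured performance or value added, rather than on who was historically promoted.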

Duncan Harris, director of Gartner’s HR practice, points out that AI can be used even before the recruitment process begins. From labour market analysis to sourcing and marketing tools, AI may prove a vital resource before recruiters and HR leaders begin to establish relationships with candidates. 

“Companies might experience several problems in their recruitment process,” adds Harris. “They may have too many candidates for too few positions, too much hiring volume to process, difficulty in finding specialist candidates or low hiring volume. Therefore, HR leaders should invest in AI capabilities that are aligned to solving their key talent challenges.”

Equally, Harris adds that it’s important HR determines which roles might benefit from hiring algorithms that predict performance and find ways to leverage AI to enable higher-quality, high-touch interactions between recruiters and candidates. “But they should keep in mind that many still believe AI is being used to automate the decision about who should be hired, so it’s important to communicate regularly and be transparent about how AI is really being used.”


How Kraft Heinz recruits using algorithms

“Both human-based and AI-based recruitment processes can have bias – but humans cannot be audited or reprogrammed,” says Pieter Schalkwijk, director of talent acquisition at Kraft Heinz International Zone. “When using AI in your recruitment process, it should be designed in such a way that it can be continuously audited and checked for bias.” 

Not only does the food and beverages giant audit its AI for bias, it also uses AI to analyse job descriptions in a further effort to reduce any unfairness, helping recruiters find language that is inclusive of all gender and age groups. 

As well as this, the company uses a gamified assessment to evaluate candidates’ cognitive and behavioural attributes. Before the AI is deployed, the algorithm is rigorously checked and audited for bias using an open-source algorithm-auditing tool.
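Kraft Heinz has not published the detail of its audits, but one widely used check that such auditing tools typically include is a comparison of selection rates across demographic groups against the ‘four-fifths rule’. The sketch below, with made-up data, shows the idea.

```python
# Illustrative adverse-impact check (not Kraft Heinz's actual audit): compare
# assessment pass rates across groups and flag any group whose rate falls below
# four fifths (80 per cent) of the best-performing group's rate.
from collections import defaultdict

def adverse_impact_ratios(records):
    """records: iterable of (group, passed) pairs; returns impact ratio per group."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in records:
        total[group] += 1
        passed[group] += bool(ok)

    rates = {g: passed[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: (demographic group, passed gamified assessment)
sample = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 42 + [("B", False)] * 58
for group, ratio in adverse_impact_ratios(sample).items():
    status = "OK" if ratio >= 0.8 else "flag for review"
    print(f"group {group}: impact ratio {ratio:.2f} – {status}")
```

In this made-up example, group B’s pass rate is 70 per cent of group A’s, so the check would flag the assessment for further investigation before deployment.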

“We will likely slowly increase the use of AI in the recruitment process, always testing and learning in a way that will improve and de-bias our process,” says Schalkwijk. “But we will use the insights from the AI as extra data points, not to remove human interviews from our process.”