Comment

How to embrace AI recruitment and avoid bias

30 Sep 2021 By Khyati Sundaram

Employers must be careful when implementing new technology as often it will be trained using data that is cheap, readily available and full of prejudice, says Khyati Sundaram

Artificial intelligence is creeping into almost every aspect of our lives, job hunting included. Whether through LinkedIn, Facebook, or younger companies like Modern Hire, AI is helping organisations of all sizes connect with candidates and sift through applications at lightning speed. 

With job vacancies mounting and the market growing ever more competitive, it might feel tempting to turn to digital solutions to expedite your next hire. But it’s important to remember that AI isn’t a panacea for your recruitment needs.

AI can be an incredibly useful tool. Theoretically, it’s capable of learning what works and matching the ‘right’ candidate with the ‘right’ role. But just as humans can be guilty of unconscious bias, AI models are far from immune to perpetuating the same prejudices. 

LinkedIn’s AI was famously caught out for recommending more men than women for jobs. It learned that men tended to engage with more recruiters, apply for more jobs and list more skills on their CVs than women, and interpreted these behaviours as indicators of performance. What it failed to take into account was that men will apply for jobs when they meet only 60 per cent of the criteria, while women tend to apply only when they meet closer to 100 per cent of them.

Facebook has also come under fire recently after an AI model it used to advertise jobs was accused of sexism. Men accounted for 96 per cent of Facebook users targeted for roles in mechanics, whereas women made up 95 per cent of those shown nursery nurse adverts. This time the disparity was driven by the use of AI to encourage as many clicks as possible, which unthinkingly reinforced damaging gender stereotypes. 

To avoid these pitfalls, it’s important that we interrogate the platforms we’re using to modernise our hiring processes. Before onboarding any new AI-powered system, dig into how the model has been trained and the data it’s learned from to mitigate the risk of doing more harm than good. 

Many AI models will be trained on data that is cheap, convenient and readily available, such as successful CVs and the job applications of existing high performers. This information is easy to access, but we know that organisations are already failing to hire women of colour, working-class candidates and other minorities, so training an algorithm on the existing picture of success means setting yourself up for perpetual monocultures and a lack of diversity.

Instead, we need to change the status quo and use AI as a tool to drive de-biased hiring, rather than as a one-size-fits-all hiring solution. For example, AI can be used to assess job adverts for gender-coded language, to ensure the way you’re advertising roles isn’t excluding certain groups.

Research shows that when ‘masculine-coded’ language such as ‘individual’, ‘driven’ and ‘challenging’ is used in job descriptions, the number of female applicants falls by up to 10 per cent. Training AI models to detect gender-coded language and make adverts more inclusive can help employers de-bias their hiring process by attracting a larger and more diverse candidate pool.
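
A minimal sketch of how such a check might work, assuming a hand-picked word list (the words, names and output format below are illustrative assumptions, not the lists or logic used by any particular vendor):

```python
import re

# Illustrative subsets only: a real tool would use the full masculine- and
# feminine-coded word lists from published gender-decoder research.
MASCULINE_CODED = {"individual", "driven", "challenging", "competitive", "leader"}
FEMININE_CODED = {"collaborative", "supportive", "committed", "interpersonal"}

def audit_advert(text: str) -> dict:
    """Flag gender-coded words in a job advert and report the overall skew."""
    words = re.findall(r"[a-z]+", text.lower())
    masc = [w for w in words if w in MASCULINE_CODED]
    fem = [w for w in words if w in FEMININE_CODED]
    if len(masc) > len(fem):
        skew = "masculine"
    elif len(fem) > len(masc):
        skew = "feminine"
    else:
        skew = "neutral"
    return {"masculine_hits": masc, "feminine_hits": fem, "skew": skew}

advert = "We need a driven individual who thrives in a challenging, competitive environment."
print(audit_advert(advert))
# skew comes back 'masculine', prompting a rewrite before the advert goes live
```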

Where AI is being used to find and filter candidates, hiring managers mustn’t blindly follow what it suggests. They must also actively source talent from a range of platforms, locations, backgrounds and communities. And where AI is brought in to help, proxies such as names, academic background and number of years’ experience must be removed, as these are not reliable predictors of success.
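
As a rough sketch of what stripping those proxies could look like before applications ever reach a screening model (the record structure and field names here are hypothetical, not drawn from any particular applicant-tracking system):

```python
# Hypothetical applicant record; field names are illustrative assumptions.
applicant = {
    "name": "Jane Doe",
    "university": "University of Manchester",
    "years_experience": 7,
    "skills": ["stakeholder management", "data analysis"],
    "work_sample_scores": [8, 9, 7],
}

# Proxies that track demographics and background rather than performance.
PROXY_FIELDS = {"name", "university", "years_experience"}

def strip_proxies(record: dict) -> dict:
    """Remove proxy fields so only skills-relevant data reaches the model."""
    return {k: v for k, v in record.items() if k not in PROXY_FIELDS}

print(strip_proxies(applicant))
# {'skills': ['stakeholder management', 'data analysis'], 'work_sample_scores': [8, 9, 7]}
```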

Instead, AI should be leveraged to focus on the most accurate and objective predictors of performance that we have: skills. Based on job descriptions, AI can predict which skills are needed for candidates to succeed in particular roles, enabling employers to test for these competencies during the selection process. 
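
A toy version of that mapping, assuming a predefined skills taxonomy (the taxonomy and simple keyword matching below stand in for what would, in practice, be a much larger taxonomy and a trained model):

```python
# Tiny illustrative taxonomy of skills and the phrases that signal them.
SKILLS_TAXONOMY = {
    "data analysis": ("analyse data", "data analysis", "reporting"),
    "stakeholder management": ("stakeholder", "stakeholders"),
    "budgeting": ("budget", "budgeting", "forecasting"),
}

def skills_for_role(description: str) -> list:
    """Map phrases in a job description to the skills worth testing for."""
    text = description.lower()
    return [
        skill
        for skill, phrases in SKILLS_TAXONOMY.items()
        if any(phrase in text for phrase in phrases)
    ]

jd = "You will analyse data, manage senior stakeholders and own the team budget."
print(skills_for_role(jd))
# ['data analysis', 'stakeholder management', 'budgeting'] -> competencies to
# assess during selection, instead of screening on CVs
```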

Overall, and most crucially, any AI model or algorithm must be trained on ethical, de-biased datasets that have been cleansed of any identifying information or data points that could introduce discrimination or bias. Current data is biased because our current hiring processes are biased – we can’t feed biased data into a tech-powered recruitment system and hope for different outcomes. This is why many AI implementations are flawed, and why we’re yet to implement an AI-based hiring model at my organisation.

Rather than letting AI make hiring decisions using historical data, we must focus on actively creating clean, fair datasets which enable humans to make better, more equitable decisions. This will pave the way for AI models to make more accurate and ethical sourcing and hiring decisions in future.

It’s crucial that we proceed with caution when it comes to implementing new solutions lest we risk making existing problems more entrenched. However, if approached ethically and leveraged effectively, emerging technology could unlock a prejudice-free future for recruitment. 

Khyati Sundaram is an expert on AI recruitment and CEO of Applied
