Legal

The data protection implications of using AI in recruitment

14 Sep 2018 By Richard Brown

What should HR professionals be aware of when using artificial intelligence during the recruitment process? Richard Brown reports

Before we all point out that the ‘H’ in HR stands for ‘human’, and that this might therefore seem an area unsuitable for machines, dig a little deeper and you can see how automated decision-making has the potential not only to create efficiencies but also to enhance the recruitment process.

Impact on bias – the positives

Automated decision-making can be objective and remove unconscious bias from recruitment. In recent years, as knowledge and understanding of overt discriminatory practices have improved, the focus has shifted to eradicating unconscious bias in the minds of recruiters.

Unconscious bias occurs where we make decisions using mental shortcuts to identify with people who are like us or share our values. These unconscious thoughts can lead to negative decisions based on stereotypical views that shape our understanding, actions and attitudes without our awareness. For example, a recruiter might assume that a parent would not wish to travel far to work and discard their application on that basis.

AI models can be used at the recruitment stage to produce decisions free from such bias. For large employers looking to make efficiencies, this type of talent acquisition software can assist by scanning, reading and evaluating a large number of applications quickly.

Potential side effects of input bias

Like any emerging technology, AI may have unintended consequences. One of these is the scope for input bias to creep into an AI system. A recent parliamentary select committee report on artificial intelligence considered the possibility that the data input into AI could be subject to bias, as well as the scope for the algorithms themselves to produce biased decisions. The report refers to AI used in the American criminal justice system to assess risk in sentencing, explaining that this system “commonly overestimated the recidivism risk of black defendants and underestimated that of white defendants”. Employers therefore need to be alive to these issues, ask questions and carry out appropriate diligence before introducing AI into their operations.

Data protection issues

Decisions made using automated decision-making have a variety of data protection implications, which have been amplified by the introduction of the General Data Protection Regulation (GDPR). Some of the key issues employers should consider before adopting AI in their processes are:

  • At the start of any AI project, employers should consider whether a data protection impact assessment (DPIA) is required. Employers must carry out a DPIA where a type of processing is likely to result in a high risk to the rights and freedoms of individuals. A DPIA involves the identification of privacy risks and a consideration of what is necessary and proportionate. If algorithms are used, there should be transparency about how they are applied in order to demonstrate accountability;
  • If decisions are made about an individual at an automated level, this should be made clear in the data privacy notice issued to candidates. The transparency principle inherent in GDPR requires that individuals have the right to know how their personal data is processed;
  • GDPR also sets out additional protections where a decision is based solely on automated processing and has a legal or similarly significant effect on a data subject. This is likely to include a recruitment decision made without any human input, and the recitals to GDPR that deal with automated decision-making explicitly refer to e-recruiting practices. Individuals have the right not to be subject to such decisions based solely on automated means, unless this is authorised by law, necessary for a contract, or based on the individual’s explicit consent. Even then, except where it is authorised by law, specific safeguards must be in place, such as a mechanism for the individual to challenge the decision and to obtain human intervention. The lawful basis for processing personal data in this way needs to be considered and identified before proceeding with automated profiling.

None of these steps should stop employers embarking on the introduction of AI. In the new GDPR world we live in, we should all be including privacy in workplace systems by design. Automated decision-making simply adds an additional GDPR layer to the mix.

Richard Brown is a partner at CMS
