Should people management policies use AI?

Technology with sophisticated algorithms is becoming a popular tool in HR to make employment decisions, but it does not eliminate biases, warns Anna Ginès

The use of artificial intelligence (AI) in people management is becoming increasingly popular: sophisticated algorithms use machine learning to make automated employment decisions. From a business point of view, AI technology has clear advantages: decision-making becomes fast and efficient and, by applying mathematics, seemingly objective. Or does it?

Unfortunately, AI and mathematical algorithms do not eliminate biases per se. In fact, without proper safeguards, they can reproduce and magnify biases while endangering workers’ rights. But how does this happen? And how can we prevent it?

Firstly, when collecting data to design algorithms for people management policies, we run the risk of infringing on the right to privacy. For example, it is increasingly common in recruitment to use social media tracking systems or facial recognition during interviews. These tools, which can be useful for obtaining information about a candidate, can also give access to data that belong to the individual’s private sphere, such as their political views, sexual orientation, or state of mind.

This information can be obtained not only from first-hand data but also by inferring it from data that are, in principle, objective, such as photographs, events attended, or web pages visited. Moreover, many workers are unaware of what information is being collected about them and what profile is being created, which conflicts with data protection rights.

In terms of equality and non-discrimination, many algorithms are created without due representation of minority or traditionally discriminated-against groups – such as women and people from ethnic minority backgrounds – which leads to erroneous predictions or decisions.

For example, a company may design an algorithm that discards CVs with spelling mistakes. This is, in principle, an objective parameter, but it could discriminate against people on the basis of their origin, as it would automatically rule out candidates whose mother tongue differs from the job’s working language. In some jobs, such as journalism or teaching, flawless spelling may be an essential requirement but, in many others, the filter could be discriminatory. Therefore, no matter how reasonable and objective the requirements may seem when designing an algorithm, the possible side effects should always be considered.
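To see how a seemingly neutral rule can produce unequal outcomes, consider a minimal sketch of such a filter. The candidate records and error counts below are entirely invented for illustration; the point is only that a single "objective" threshold can yield very different selection rates for the two groups.

```python
# Hypothetical candidate pool: names, group labels, and spelling-error
# counts are invented for illustration only.
candidates = [
    {"name": "A", "native_speaker": True,  "spelling_errors": 0},
    {"name": "B", "native_speaker": True,  "spelling_errors": 0},
    {"name": "C", "native_speaker": True,  "spelling_errors": 1},
    {"name": "D", "native_speaker": False, "spelling_errors": 1},
    {"name": "E", "native_speaker": False, "spelling_errors": 2},
    {"name": "F", "native_speaker": False, "spelling_errors": 0},
]

def passes_filter(candidate, max_errors=0):
    """The 'objective' rule: discard any CV with spelling mistakes."""
    return candidate["spelling_errors"] <= max_errors

def selection_rate(group):
    """Fraction of a group that survives the filter."""
    selected = [c for c in group if passes_filter(c)]
    return len(selected) / len(group)

natives = [c for c in candidates if c["native_speaker"]]
non_natives = [c for c in candidates if not c["native_speaker"]]

print(f"native speakers selected:     {selection_rate(natives):.0%}")
print(f"non-native speakers selected: {selection_rate(non_natives):.0%}")
```

With these made-up numbers, native speakers are selected at twice the rate of non-native speakers, even though the rule never mentions origin at all.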

Finally, many companies use ‘off the shelf’ algorithms, designed, prepared, and trained by a third party. There is a risk that a company may hide behind such a ‘black box’ to justify an unfair decision, evading its responsibility.

There are a number of measures, in addition to purely technical solutions, that can be taken to help develop better algorithms and protect workers’ rights. These include:

  • Promote the participation of minority groups in both data collection and in the design of algorithms. In addition, it may be time to address the debate on quotas in algorithms head on.
  • Legally require all algorithms to be auditable, so that they can be subject to regular evaluations that analyse the effects they generate.
  • Ensure transparency of algorithms and give workers access to their data, including data that have been inferred about them.
  • Give workers’ representatives access to information not only on the variables used to make employment decisions, but also on the impact those decisions have – a kind of internal audit.
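The kind of regular evaluation the list proposes could, in its simplest form, compare selection rates across groups. The sketch below uses invented figures and applies the ‘four-fifths rule’, a benchmark from US employment-selection guidelines that flags concern when one group’s selection rate falls below 80% of another’s; it is offered here only as one possible yardstick, not as a standard the author endorses.

```python
def adverse_impact_ratio(rate_minority, rate_majority):
    """Ratio of selection rates; values below 0.8 are a common
    red flag under the 'four-fifths rule'."""
    return rate_minority / rate_majority

# Hypothetical audit figures: applications and selections per group,
# as they might be logged by the decision-making system.
decisions = {
    "group_a": {"applied": 200, "selected": 120},  # majority group
    "group_b": {"applied": 100, "selected": 40},   # minority group
}

def rate(group):
    """Selection rate for one group."""
    return decisions[group]["selected"] / decisions[group]["applied"]

ratio = adverse_impact_ratio(rate("group_b"), rate("group_a"))
flagged = ratio < 0.8
print(f"impact ratio: {ratio:.2f}  flagged for review: {flagged}")
```

Here group B is selected at 40% against group A’s 60%, a ratio of roughly 0.67, so the audit would flag the algorithm for closer review.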

In short, it is not a question of rejecting technology, but of using the full potential of innovation in a positive sense. Companies need to be able to rely on effective algorithms to help them make the best decisions, and this can only happen if the right data systems are in place.

Anna Ginès is a professor at Esade Business & Law School