Employers need diverse AI teams to guard against unethical use of technology

CBI says sexism and racism are a real danger – and HR needs to be involved in AI decision making

HR departments need to closely monitor the implementation of AI to make sure it does not entrench existing barriers to disadvantaged groups, a leading business body has warned.

The Confederation of British Industry (CBI) said businesses needed diverse teams in place developing the use of AI and automation to avoid “entrenching existing unfairness and barriers”.

In a new report, AI: Ethics Into Practice, the CBI also called on employers to ensure the data used to develop AI did not contain historic prejudices against particular groups, and said diverse teams were “more likely to spot problems in data and challenge assumptions that could lead to unfair bias being programmed into AI”.

Felicity Burch, director of digital and innovation at the CBI, said businesses needed to think about the ethical issues around AI in the same way they dealt with other business issues.

“At the end of the day, meaningful ethics is similar to issues organisations already think about: effective governance, employee empowerment and customer engagement,” she said.

“When it comes to AI, businesses who prioritise fairness and inclusion are more likely to create algorithms that make better decisions, giving them the edge.”

Edward Houghton, head of research and thought leadership at the CIPD, said that HR was often left out of the conversation when it came to the strategy, operation, design and implementation of AI. “[HR] is the least engaged and least consulted department when it comes to AI,” he said.

“It’s a big risk for organisations if they aren’t engaging HR up front, as there are going to be some real challenges around ensuring AI and automation in particular is being implemented inclusively and not to the detriment of employees,” said Houghton. 

However, Hephzi Pemberton, chief executive and founder of Equality Group, warned there was also a risk that broader diversity and inclusion issues could become “stuck in the HR department”.

“That isn’t where it belongs,” said Pemberton. “If diversity and inclusion is just stuck as an HR topic then it doesn't flow through the organisation, and you don't see people making changes and making the decisions that happen when you’ve got it set as a strategic priority from the top.”

Pemberton echoed the sentiments of the CBI report and said: “More diversity creates more innovation, creativity and resilience while removing levels of bias that you’ll see from homogenous teams.”

But, she added, businesses already faced challenges when trying to build diverse teams with data and AI skills, and the issue needed to be considered earlier in the talent pipeline for these jobs.

“The pipeline is fairly constrained with the diversity you can get within data and AI teams,” she said. “[Diversity] needs to happen earlier at a government and educational level so we have a diverse set of students studying these subjects.

“I think businesses will have to work hard to form and recruit those diverse teams into more technical coding roles.”

Houghton also suggested that the UN’s guiding principles on business and human rights were a good place for employers to start when thinking about the ethics of AI in the workplace.

“It's really important that when we think of AI and ethics that we adhere to basic human rights for individuals,” he said. “That’s the basis of how we should implement new technology. Employers need to go to the framework that guides their decision making and ensure employee voice is considered within the process.”