Automation and AI technologies need to be developed with humans in mind, a leading mathematician told delegates at the CIPD Festival of Work.
Dr Hannah Fry, associate professor in mathematics of cities at University College London, said that when using data and algorithms to build tools that affect real people, “you cannot just create something, put it on a shelf and decide whether you think it’s good or bad in isolation.
“You absolutely have to think about how what you’re creating will fit into the world around it. You have to think about how humans will use it, and you have to think about the longer-term implications of what you’re doing,” she said.
Opening the third and final day of the online conference, Fry said there was often a rhetoric of ‘humans versus machines’, as though it might become possible one day to just swap one for the other in any given role. But, she said, the real challenge was to create algorithms that supported human decision-making instead of replacing it.
One controversial area where this was already being done was the US judicial system, she reported, where in some cases algorithms that analyse the results of a questionnaire taken by defendants were used to help judges decide on sentencing. When asked, most people said they would rather have a human judge pass sentence, said Fry. “[But] it turns out that actually there is an incredible amount of luck and bias in the judicial system,” she said.
She pointed to studies showing different judges often give different responses to the same case; that the same judge will give a different verdict to the same case on a different day; and that judges don’t like to give the same response too many times in a row. “So if three bail cases before you have been successful, your chances of a successful hearing drop off a cliff,” said Fry.
“My favourite is that judges tend to be a lot stricter in towns where the local sports team has lost recently, which I think demonstrates the kind of mess of luck and inconsistency that you’re really up against,” she said.
There was a place for algorithms to help improve consistency in situations like this, said Fry. But humans needed to overrule algorithms when they made mistakes, she added: “While the algorithms we have now are spectacularly good at a lot of things, they are not perfect. They don’t understand context, and they don’t understand nuance… And when you add to that the fact that humans are very inclined to take cognitive shortcuts whenever they can, that can end up being a bit of a recipe for disaster.”
Fry warned that when a process or a machine “gets dressed up as an algorithm or AI, it can end up taking on an air of authority that makes it hard to argue with”, giving the real-life example of a group of tourists unquestioningly following a sat nav and driving a car into the ocean.
“We are too trusting… we’re also selfish, we forget stuff, we like to take the easy way out. And when we think about automation, the way that we approach automation, it has to acknowledge that fact,” said Fry. “It has to acknowledge those human flaws.”