An animal charity in San Francisco has become a target for much global criticism and local abuse, all because of the behaviour of its most recent recruit: an R2-D2-style robot.
The 1.5m-tall Knightscope security robot had been trundling around neighbouring car parks and alleyways, recording video and stopping to say hello to passers-by – but it ended up being accused of harassing homeless people. A social media storm led to calls for acts of retribution, violence and vandalism against the charity. The robot itself was regularly tipped over, once covered with a tarpaulin, smeared with barbecue sauce and even daubed with faeces.
Problems with electronic workers are becoming more common. A similar robot in another US city inadvertently knocked over a toddler; others have been tipped over by disgruntled office workers. It’s an example of how robot technologies can provoke extreme, perhaps irrational, reactions. Security was a real issue for the charity – there had been a string of break-ins, incidents of vandalism and evidence of hard drug use, all of which were making staff and visitors feel unsafe – and it seems reasonable that the charity should try to rectify this.
The introduction of new forms of artificial intelligence (AI) into people’s everyday working lives is one of the biggest challenges facing not just employers, but entire societies. How far are we willing – or should we be willing at all – to let robots into the workplace? What kinds of roles are acceptable and which are not? Robots are already capable of providing care to children and the elderly, working in hospitals, performing surgery and delivering customer care, as well as a whole range of office-based analytical roles.
Most importantly, who sets the rules for how they behave, and how do they decide on priorities when interacting with people? There are fundamental issues at stake, such as whether a robot should prioritise business tasks and objectives over scruples – over the opinions and feelings of its human co-workers. After all, its behaviour and its ‘right’ choices are all determined in the programming.
The potential of robot-enhanced living and the huge commercial opportunities involved mean we will become more accepting of robots as they become a familiar – even inescapable – part of our lives. But that central issue of what kinds of robots we want and where must be dealt with now. The debate needs to be shaped as much by ordinary members of the public, employees and their managers, as by technologists and engineers. We all have a stake in deciding what makes a ‘good’ robot.
The British Standards Institution (BSI) published the first standard for robot ethics, BS 8611, in 2016. But that’s just the start. As part of its work with the BSI’s UK Robot Ethics Group, Cranfield University is seeking the public’s views on the future of robots in our lives, to help develop new standards and inform the work of developers and manufacturers.
Our relationship with AI and robots is messy and confused. On the one hand, there are attacks on robots when there is a feeling of intrusion. On the other, we’re increasingly emotionally attached to our personal technologies, to our smartphones and tablets, and we make pets of robot toys and anything that shows signs of engagement, no matter how limited and fake. There’s the potential for too much trust.
We need to be clear-sighted about the future of human-robot relationships in the workplace, and that means debating it now, before the sheer scale of consumer opportunities and cost savings makes the decisions for us.
Dr Sarah Fletcher is a senior research fellow at the Centre for Structures, Assembly and Intelligent Automation at Cranfield University