Do you think a robot should be allowed to lie? A new study published in Frontiers in Robotics and AI investigates what people think of robots that deceive their users.

By Stine S Johansen

Social norms say it can be okay for people to lie if it protects someone from harm. Should a robot be allowed the same privilege to lie for the greater good? The answer, according to this study, is yes – in some cases.

This is important, because robots are no longer reserved for science fiction. Robots are already part of our daily lives. You can find them vacuum-cleaning your floors at home, serving you at restaurants, or giving your elderly family member companionship. In factories, robots are helping workers assemble cars.

Several companies, like Samsung and LG, are even developing robots that may soon be able to do more than just vacuuming. They could do your house chores or play your favourite song if you look sad.

The new study, led by cognition researcher Andres Rosero from George Mason University in the United States, looked at three ways robots might lie to people:

1. The robot could lie about something other than itself.

2. The robot could hide the fact it is able to do something.

3. The robot could pretend it is able to do something even though it is not.

Respondents were asked if the robot’s behaviour was deceptive, and whether or not they thought the behaviour was okay. The researchers also asked respondents if they thought the robot’s behaviour could be justified.

While all types of lies were recognised as deceptive, respondents still approved of some types of lies and disapproved of others. On average, people approved of type 1 lies, but not type 2 and type 3.

Just over half of respondents (58 per cent) thought a robot lying about something other than itself (type 1) was justified if it spared someone’s feelings or prevented harm.

This was the case in one of the stories involving a medical assistant robot that would lie to an elderly woman with Alzheimer’s about her husband still being alive.

On average, respondents didn’t approve of the other two types of lies, though. Here, the scenarios involved a housekeeping robot in an Airbnb rental and a factory robot co-worker.

In the rental scenario, the housekeeping robot hides the fact it records videos while doing chores around the house. Only 23.6 per cent of respondents thought the video recordings were justified, arguing they could keep the house visitors safe or monitor the quality of the robot’s work.

In the factory scenario, the robot complains about the work by saying things like “I’ll be feeling really sore tomorrow”. This gives the human workers the impression the robot can feel pain. Only 27.1 per cent of respondents thought it was okay for the robot to lie, saying it’s a way to connect with the human workers.

If a robot lies to someone, then, there could be an acceptable reason for it. Research features many philosophical debates about how robots should fit in with society’s social norms. For example, these debates ask whether it is ethically wrong for robots to simulate affection for people, or whether there could be morally acceptable reasons for doing so.

This study is the first to ask people directly what they think about robots telling different types of lies.

Previous studies have shown that if we find out robots are lying, it damages our trust in them.

Perhaps, though, robot lies are not that straightforward. It depends on whether or not we believe the lie is justified.

The questions then are: who decides what justifies a lie or not? Whom are we protecting when we decide whether or not a robot should be allowed to lie? It might simply not be okay, ever, for a robot to lie.

Stine S. Johansen is a Postdoctoral Research Fellow at Queensland University of Technology. This article is republished from The Conversation under a Creative Commons licence.