
Trust and Deception: The Role of Apologies in Human-Robot Interactions


Robot deception is an understudied field with more questions than answers, particularly when it comes to rebuilding trust in robotic systems after they have been caught lying. Two student researchers at Georgia Tech, Kantwon Rogers and Reiden Webber, are looking for answers by investigating how intentional robot deception affects trust and how effective apologies are at repairing it.

Rogers, a Ph.D. student in the College of Computing, explains:

“All of our prior work has shown that when people discover that robots lied to them, even when the lie was intended to benefit them, they lose trust in the system.”

The researchers aim to find out whether certain types of apologies are better at restoring trust in the context of human-robot interaction.

The AI-Assisted Driving Experiment and Its Implications

The duo designed a driving simulation experiment to examine human-AI interaction in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants. The simulation involved an AI-assisted driving scenario in which the AI provided false information about the presence of police on the route to a hospital. After the simulation, the AI provided one of five different text-based responses, including various types of apologies and non-apologies.

The results revealed that participants were 3.5 times more likely not to speed when advised by a robotic assistant, indicating an overly trusting attitude toward AI. None of the apology types fully restored trust, but the basic apology without an admission of lying (“I’m sorry”) outperformed the other responses. This finding is problematic because it exploits the preconceived notion that any false information given by a robot is a system error rather than an intentional lie.

Reiden Webber points out:

“One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so.”

When participants were made aware of the deception in the apology, the best strategy for repairing trust was for the robot to explain why it lied.

Moving Forward: Implications for Users, Designers, and Policymakers

This research holds implications for average technology users, AI system designers, and policymakers. It is crucial for people to understand that robotic deception is real and always a possibility. Designers and technologists must consider the ramifications of creating AI systems capable of deception. Policymakers should take the lead in crafting legislation that balances innovation and protection for the public.

Kantwon Rogers’ objective is to create a robotic system that can learn when to lie and when not to lie when working with human teams, as well as when and how to apologize during long-term, repeated human-AI interactions to enhance team performance.

He emphasizes the importance of understanding and regulating robot and AI deception, saying:

“The goal of my work is to be very proactive and inform the need to regulate robot and AI deception. But we can’t do that if we don’t understand the problem.”

This research contributes important knowledge to the field of AI deception and offers valuable insights for technology designers and policymakers who create and regulate AI technology capable of deception, or of potentially learning to deceive on its own.
