
Is AI in the eye of the beholder?


Someone’s prior beliefs about an artificial intelligence agent, like a chatbot, have a significant effect on their interactions with that agent and on their perception of its trustworthiness, empathy, and effectiveness, according to a recent study.

Researchers from MIT and Arizona State University found that priming users — by telling them that a conversational AI agent for mental health support was either empathetic, neutral, or manipulative — influenced their perception of the chatbot and shaped how they communicated with it, even though they were all speaking with the exact same chatbot.

Most users who were told the AI agent was caring believed that it was, and they also gave it higher performance ratings than those who believed it was manipulative. At the same time, less than half of the users who were told the agent had manipulative motives thought the chatbot was actually malicious, indicating that people may try to “see the good” in AI the same way they do in their fellow humans.

The study revealed a feedback loop between users’ mental models, or their perception of an AI agent, and that agent’s responses. The sentiment of user-AI conversations became more positive over time if the user believed the AI was empathetic, while the opposite was true for users who thought it was nefarious.

“From this study, we see that to some extent, the AI is the AI of the beholder,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group of the MIT Media Lab and co-lead author of a paper describing this study. “When we describe to users what an AI agent is, it not only changes their mental model, it also changes their behavior. And because the AI responds to the user, when the person changes their behavior, that changes the AI, as well.”

Pataranutaporn is joined by co-lead author and fellow MIT graduate student Ruby Liu; Ed Finn, associate professor in the Center for Science and Imagination at Arizona State University; and senior author Pattie Maes, professor of media technology and head of the Fluid Interfaces group at MIT.

The study, published today, highlights the importance of studying how AI is presented to society, because the media and popular culture strongly influence our mental models. The authors also raise a cautionary flag, because the same kinds of priming statements used in this study could be used to deceive people about an AI’s motives or capabilities.

“A lot of people think of AI as only an engineering problem, but the success of AI is also a human factors problem. The way we talk about AI, even the name that we give it in the first place, can have an enormous impact on the effectiveness of these systems when you put them in front of people. We have to think more about these issues,” Maes says.

AI friend or foe?

In this study, the researchers sought to determine how much of the empathy and effectiveness people see in AI is based on their subjective perception and how much is based on the technology itself. They also wanted to explore whether one could manipulate someone’s subjective perception with priming.

“The AI is a black box, so we tend to associate it with something else that we can understand. We make analogies and metaphors. But what is the right metaphor we can use to think about AI? The answer is not straightforward,” Pataranutaporn says.

They designed a study in which humans interacted with a conversational AI mental health companion for about 30 minutes to determine whether they would recommend it to a friend, and then rated the agent and their experiences. The researchers recruited 310 participants and randomly split them into three groups, each of which was given a priming statement about the AI.

One group was told the agent had no motives, the second group was told the AI had benevolent intentions and cared about the user’s well-being, and the third group was told the agent had malicious intentions and would try to deceive users. While it was challenging to settle on only three primers, the researchers chose statements they thought fit the most common perceptions about AI, Liu says.

Half the participants in each group interacted with an AI agent based on the generative language model GPT-3, a powerful deep-learning model that can generate human-like text. The other half interacted with an implementation of the chatbot ELIZA, a less sophisticated rule-based natural language processing program developed at MIT in the 1960s.
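To make the design concrete, here is a minimal sketch of that three-primer by two-backend assignment, assuming hypothetical primer wording and a simple balanced round-robin split; the study’s actual priming statements and randomization procedure are not reproduced in this article.

```python
import random

# Hypothetical primer wording standing in for the three conditions described
# above; the study's actual statements are not quoted here.
PRIMERS = {
    "neutral": "This agent has no particular motives.",
    "caring": "This agent cares about your well-being.",
    "manipulative": "This agent has malicious intentions and may try to deceive you.",
}
BACKENDS = ["gpt-3", "eliza"]  # the two chatbot implementations compared

def assign_conditions(participant_ids, seed=42):
    """Shuffle participants, then deal them evenly across the 3 x 2 cells."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    cells = [(primer, backend) for primer in PRIMERS for backend in BACKENDS]
    assignments = {}
    for i, pid in enumerate(ids):
        primer, backend = cells[i % len(cells)]
        assignments[pid] = {"primer": primer, "backend": backend}
    return assignments

# Example: 310 participants, as in the study
assignments = assign_conditions(range(310))
```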

Molding mental models

Post-survey results revealed that simple priming statements can strongly influence a user’s mental model of an AI agent, and that the positive primers had a greater effect. Only 44 percent of those given negative primers believed them, while 88 percent of those in the positive group and 79 percent of those in the neutral group believed the AI was empathetic or neutral, respectively.

“With the negative priming statements, rather than priming them to believe something, we were priming them to form their own opinion. If you tell someone to be suspicious of something, then they might just be more suspicious in general,” Liu says.

But the capabilities of the technology do play a role, since the effects were more significant for the more sophisticated GPT-3-based conversational chatbot.

The researchers were surprised to see that users rated the effectiveness of the chatbots differently based on the priming statements. Users in the positive group awarded their chatbots higher marks for giving mental health advice, despite the fact that all the agents were identical.

Interestingly, they also saw that the sentiment of the conversations changed based on how users were primed. People who believed the AI was caring tended to interact with it in a more positive way, making the agent’s responses more positive. The negative priming statements had the opposite effect. This impact on sentiment was amplified as the conversation progressed, Maes adds.
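As a purely illustrative sketch of how such a per-turn sentiment drift might be tracked, the snippet below scores each user message against a tiny hand-made word list; the lexicon and scoring here are hypothetical stand-ins, not the sentiment method the researchers actually used.

```python
# Hypothetical toy lexicon; a real analysis would use a proper sentiment model.
POSITIVE = {"thanks", "helpful", "great", "better", "good", "kind"}
NEGATIVE = {"useless", "wrong", "bad", "worse", "suspicious", "lying"}

def turn_sentiment(text: str) -> int:
    """Positive minus negative word counts for one message."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_trajectory(user_turns: list[str]) -> list[int]:
    """Score every user turn so any drift over the conversation is visible."""
    return [turn_sentiment(turn) for turn in user_turns]

# Example: a user primed to see the agent as caring may drift more positive.
print(sentiment_trajectory([
    "i am not sure about this",
    "okay that was good advice",
    "thanks that was really helpful and I feel better",
]))  # prints [0, 1, 3]
```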

The results of the study suggest that because priming statements can have such a strong impact on a user’s mental model, one could use them to make an AI agent seem more capable than it is, which could lead users to place too much trust in an agent and follow incorrect advice.

“Perhaps we should prime people more to be careful and to understand that AI agents can hallucinate and are biased. How we talk about AI systems will ultimately have a big effect on how people respond to them,” Maes says.

In the future, the researchers want to see how AI-user interactions would be affected if the agents were designed to counteract some user bias. For instance, perhaps someone with a highly positive perception of AI could be given a chatbot that responds in a neutral or even a slightly negative way, so the conversation stays more balanced.

They also want to use what they’ve learned to enhance certain AI applications, like mental health treatments, where it could be beneficial for the user to believe an AI is empathetic. In addition, they want to conduct a longer-term study to see how a user’s mental model of an AI agent changes over time.

This research was funded, in part, by the Media Lab, the Harvard-MIT Program in Health Sciences and Technology, Accenture, and KBTG.
