Would you trust AI to mediate an argument?


Researchers from Google DeepMind recently trained a system of large language models to help people come to agreement over complex but important social or political issues. The AI model was trained to identify and present areas where people’s ideas overlapped. With the assistance of this AI mediator, small groups of study participants became less divided in their positions on various issues. You can read more from Rhiannon Williams here.

One of the best uses for AI chatbots is brainstorming. I’ve had success in the past using them to draft more assertive or persuasive emails for awkward situations, such as complaining about services or negotiating bills. This latest research suggests they could help us see things from other people’s perspectives too. So why not use AI to patch things up with my friend?

I described the conflict, as I see it, to ChatGPT and asked for advice about what I should do. The response was very validating, because the AI chatbot supported the way I had approached the issue. The advice it gave was along the lines of what I had thought of doing anyway. I found it helpful to chat with the bot and get more ideas about how to deal with my specific situation. But ultimately, I was left dissatisfied, because the advice was still pretty generic and vague (“Set your boundary calmly” and “Communicate your feelings”) and didn’t really offer the kind of insight a therapist might.

And there’s another problem: Every argument has two sides. I started a new chat and described the conflict as I believe my friend sees it. The chatbot supported and validated my friend’s decisions, just as it did for me. On one hand, this exercise helped me see things from her perspective. I had, after all, tried to empathize with the other person, not just win an argument. But on the other hand, I can totally see a situation where relying too much on the advice of a chatbot that tells us what we want to hear could cause us to double down, preventing us from seeing things from the other person’s perspective.

This served as a good reminder: An AI chatbot is not a therapist or a friend. While it can parrot the vast reams of internet text it’s been trained on, it doesn’t understand what it’s like to feel sadness, confusion, or joy. That’s why I would tread with caution when using AI chatbots for things that really matter to you, and not take what they say at face value.

An AI chatbot can never replace a real conversation, where both sides are willing to truly listen and take the other’s point of view into account. So I decided to ditch the AI-assisted therapy talk and reach out to my friend one more time. Wish me luck!


Deeper Learning

OpenAI says ChatGPT treats us all the same (most of the time)

Does ChatGPT treat you the same whether you’re a Laurie, Luke, or Lashonda? Almost, but not quite. OpenAI has analyzed millions of conversations with its hit chatbot and found that ChatGPT will produce a harmful gender or racial stereotype based on a user’s name in around one in 1,000 responses on average, and as many as one in 100 responses in the worst case.
