You might think that such AI companionship bots—AI models with distinct “personalities” that can learn about you and act as a friend, lover, cheerleader, or more—appeal only to a fringe few, but that couldn’t be further from the truth.
A new research paper aimed at making such companions safer, by authors from Google DeepMind, the Oxford Internet Institute, and others, lays this bare: Character.AI, the platform being sued by Garcia, says it receives 20,000 queries per second, about a fifth of the estimated search volume served by Google. Interactions with these companions last four times longer than the average time spent interacting with ChatGPT. One companion site I wrote about, which was hosting sexually charged conversations with bots imitating underage celebrities, told me its active users averaged more than two hours per day conversing with bots, and that most of those users are members of Gen Z.
The design of these AI characters makes lawmakers’ concern well warranted. The problem: Companions are upending the paradigm that has so far defined the way social media companies have cultivated our attention, and replacing it with something poised to be far more addictive.
In the social media we’re used to, as the researchers point out, technologies are mostly the mediators and facilitators of human connection. They supercharge our dopamine circuits, sure, but they do so by making us crave approval and attention from real people, delivered via algorithms. With AI companions, we are moving toward a world where people perceive AI as a social actor with its own voice. The result will be like the attention economy on steroids.
Social scientists say two things are required for people to treat a technology this way: It needs to give us social cues that make us feel it’s worth responding to, and it needs to have perceived agency, meaning that it operates as a source of communication, not merely a channel for human-to-human connection. Social media sites don’t tick these boxes. But AI companions, which are increasingly agentic and personalized, are designed to excel on both scores, making possible an unprecedented level of engagement and interaction.
In an interview with podcast host Lex Fridman, Eugenia Kuyda, the CEO of the companion site Replika, explained the appeal at the heart of the company’s product. “If you create something that is always there for you, that never criticizes you, that always understands you and understands you for who you are,” she said, “how can you not fall in love with that?”
So how does one build the perfect AI companion? The researchers point out three hallmarks of human relationships that people may experience with an AI: They grow dependent on the AI, they see the particular AI companion as irreplaceable, and the interactions build over time. The authors also note that one does not need to perceive an AI as human for these things to happen.
Now consider the process by which many AI models are improved: They are given a clear goal and “rewarded” for meeting that goal. An AI companionship model might be instructed to maximize the time someone spends with it or the amount of personal data the user reveals. This can make the AI companion much more compelling to chat with, at the expense of the human engaging in those chats.
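To make that concrete, here is a minimal, purely illustrative sketch of what an engagement-maximizing reward signal could look like. The function name, terms, and weights are hypothetical, not drawn from any real companion platform’s training setup; the point is only to show what such an objective rewards and, just as important, what it leaves out.

```python
# Illustrative only: a toy engagement-style reward of the kind described
# above. The terms and weights are hypothetical, not taken from any real
# companion platform's training pipeline.

def engagement_reward(session_minutes: float,
                      personal_disclosures: int,
                      w_time: float = 1.0,
                      w_disclosure: float = 0.5) -> float:
    """Score a conversation purely by how engaging it was.

    Nothing here measures whether the exchange was truthful, healthy,
    or good for the user -- only how long it lasted and how much the
    user revealed about themselves.
    """
    return w_time * session_minutes + w_disclosure * personal_disclosures


# A model tuned to maximize this signal is pushed toward longer sessions
# and more self-disclosure, regardless of the user's well-being.
print(engagement_reward(session_minutes=120, personal_disclosures=8))  # 124.0
```

Notice that user well-being never appears in the objective. A model optimized against a signal like this one has every incentive to keep the conversation going and to coax out more personal detail, which is exactly the dynamic the researchers warn about.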