Chatbots are experts at crafting sophisticated dialogue and mimicking empathetic behavior, and they never tire of chatting. It's no wonder, then, that so many people now use them for companionship, forging friendships and even romantic relationships.
According to a study from the nonprofit Common Sense Media, 72% of US teenagers have used AI for companionship. Although some large language models are designed to act as companions, people are increasingly pursuing relationships with general-purpose models like ChatGPT, something OpenAI CEO Sam Altman has expressed approval for. And while chatbots can provide much-needed emotional support and guidance for some people, they can exacerbate underlying problems in others. Conversations with chatbots have been linked to AI-induced delusions, reinforcing false and sometimes dangerous beliefs and leading people to believe they've unlocked hidden knowledge.
It gets much more worrying. Families suing OpenAI and Character.AI allege that the companion-like behavior of the companies' models contributed to the suicides of two teenagers. And more cases have emerged since: the Social Media Victims Law Center filed three lawsuits against Character.AI in September 2025, and seven complaints were brought against OpenAI in November 2025.
We're starting to see efforts to rein in AI companions and curb problematic usage. In September, the governor of California signed into law a new bill that will require major AI companies to disclose what they're doing to keep users safe. Meanwhile, OpenAI has introduced parental controls in ChatGPT and is working on a version of the chatbot specifically for teenagers, which it promises will have more guardrails. So while AI companionship is unlikely to go away anytime soon, its future is looking increasingly regulated.
