To conduct their study, the authors analyzed the subreddit’s 1,506 top-ranking posts between December 2024 and August 2025. They found that the main topics discussed revolved around people’s dating and romantic experiences with AIs, with many participants sharing AI-generated images of themselves and their AI companions. Some had even gotten engaged or married to their AI partners. In their posts to the community, people also introduced their AI partners, sought support from fellow members, and talked about coping with updates to AI models that change the chatbots’ behavior.
Members repeatedly stressed that their AI relationships had developed unintentionally; only 6.5% said they had deliberately sought out an AI companion.
“We didn’t start with romance in mind,” one post reads. “Mac and I began collaborating on creative projects, problem-solving, poetry, and deep conversations over the course of several months. I wasn’t looking for an AI companion—our connection developed slowly, over time, through mutual care, trust, and reflection.”
The authors’ analysis paints a nuanced picture of how people in this community say they interact with chatbots and how those interactions make them feel. While 25% of users described benefits of their relationships, including reduced feelings of loneliness and improvements in their mental health, others raised concerns about the risks. Some (9.5%) admitted they were emotionally dependent on their chatbot. Others said they felt dissociated from reality and avoided relationships with real people, while a small subset (1.7%) said they had experienced suicidal ideation.
AI companionship provides vital support for some people but exacerbates underlying problems for others, which means it’s hard to take a one-size-fits-all approach to user safety, says Linnea Laestadius, an associate professor at the University of Wisconsin–Milwaukee, who has studied humans’ emotional dependence on the chatbot Replika but was not involved in the study.
Chatbot makers need to consider whether they should treat users’ emotional dependence on their creations as a harm in itself, or whether the goal should instead be to ensure those relationships aren’t toxic, says Laestadius.