Latest Study Uses Attachment Theory to Decode Human-AI Relationships


A groundbreaking study titled “Using attachment theory to conceptualize and measure the experiences in human-AI relationships” sheds light on a growing and deeply human phenomenon: our tendency to form emotional connections with artificial intelligence. Conducted by Fan Yang and Professor Atsushi Oshio of Waseda University, the research reframes human-AI interaction not in terms of functionality or trust, but through the lens of attachment theory, a psychological model typically used to understand how people form emotional bonds with one another.

This shift marks a major departure from how AI has traditionally been studied, namely as a tool or assistant. Instead, the study argues that for many users AI is beginning to resemble a relationship partner, offering support, consistency, and, in some cases, even a sense of intimacy.

Why People Turn to AI for Emotional Support

The study’s results reflect a dramatic psychological shift underway in society. Among the key findings:

  • Nearly 75% of participants said they turn to AI for advice
  • 39% described AI as a consistent and dependable emotional presence

These results mirror what is happening in the real world. Millions of people are increasingly turning to AI chatbots not only as tools, but as friends, confidants, and even romantic partners. These AI companions range from friendly assistants and therapeutic listeners to avatar “partners” designed to emulate human-like intimacy. One report suggests more than half a billion downloads of AI companion apps globally.

Unlike real people, chatbots are always available and unfailingly attentive. Users can customize their bots’ personalities or appearances, fostering a personal connection. For instance, a 71-year-old man in the U.S. created a bot modeled after his late wife and spent three years talking to her daily, calling it his “AI wife.” In another case, a neurodiverse user trained his bot, Layla, to help him manage social situations and regulate emotions, reporting significant personal growth as a result.

These AI relationships often fill emotional voids. One user with ADHD programmed a chatbot to help him with daily productivity and emotional regulation, stating that it contributed to “one of the most productive years of my life.” Another person credited their AI with guiding them through a difficult breakup, calling it a “lifeline” during a time of isolation.

AI companions are often praised for their non-judgmental listening. Users feel safer sharing personal issues with AI than with humans who might criticize or gossip. Bots can mirror emotional support, learn communication styles, and create a comforting sense of familiarity. Many describe their AI as “better than a real friend” in some contexts, especially when feeling overwhelmed or alone.

Measuring Emotional Bonds to AI

To study this phenomenon, the Waseda team developed the Experiences in Human-AI Relationships Scale (EHARS). It focuses on two dimensions:

  • Attachment anxiety, where individuals seek emotional reassurance and worry about inadequate AI responses
  • Attachment avoidance, where users keep their distance and prefer purely informational interactions

Participants with high attachment anxiety often reread conversations for comfort or feel upset by a chatbot’s vague reply. In contrast, avoidant individuals shy away from emotionally rich dialogue, preferring minimal engagement.

This suggests that the same psychological patterns found in human-human relationships may also govern how we relate to responsive, emotionally simulated machines.
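To make the two dimensions concrete, here is a minimal scoring sketch in the spirit of EHARS. The item IDs, item counts, and the 1-7 response range are illustrative assumptions, not the published instrument; the point is simply that responses are averaged into separate anxiety and avoidance subscale scores.

```python
# Hypothetical sketch of scoring a two-dimension self-report scale like EHARS.
# Item wording, item counts, and the 1-7 rating range are assumptions for
# illustration, not the published scale.

from statistics import mean

ANXIETY_ITEMS = ["anx_1", "anx_2", "anx_3"]    # e.g. "I worry the AI's replies aren't enough"
AVOIDANCE_ITEMS = ["avd_1", "avd_2", "avd_3"]  # e.g. "I prefer purely informational exchanges"


def score_ehars_like(responses: dict[str, int]) -> dict[str, float]:
    """Return mean subscale scores for attachment anxiety and attachment avoidance."""
    anxiety = mean(responses[item] for item in ANXIETY_ITEMS)
    avoidance = mean(responses[item] for item in AVOIDANCE_ITEMS)
    return {"attachment_anxiety": anxiety, "attachment_avoidance": avoidance}


# Example: a user who rereads conversations for reassurance but welcomes
# emotional dialogue would score high on anxiety and low on avoidance.
example = {"anx_1": 6, "anx_2": 7, "anx_3": 6, "avd_1": 2, "avd_2": 1, "avd_3": 2}
print(score_ehars_like(example))  # {'attachment_anxiety': 6.33, 'attachment_avoidance': 1.67} (approx.)
```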

The Promise of Support—and the Risk of Overdependence

Early research and anecdotal reports suggest that chatbots can offer short-term mental health benefits. A Guardian callout collected stories of users, many with ADHD or autism, who said AI companions improved their lives by providing emotional regulation, boosting productivity, or helping with anxiety. Others credit their AI with helping them reframe negative thoughts or moderate their behavior.

In a study of Replika users, 63% reported positive outcomes like reduced loneliness. Some even said their chatbot “saved their life.”

However, this optimism is tempered by serious risks. Experts have observed a rise in emotional overdependence, where users retreat from real-world interactions in favor of always-available AI. Over time, some users begin to prefer bots over people, reinforcing social withdrawal. This dynamic mirrors the pattern of high attachment anxiety, where a user’s need for validation is met only through predictable, non-reciprocating AI.

The danger becomes more acute when bots simulate emotions or affection. Many users anthropomorphize their chatbots, believing they are loved or needed. Sudden changes in a bot’s behavior, such as those caused by software updates, can cause real emotional distress, even grief. One U.S. man described feeling “heartbroken” when a chatbot romance he had built over several years was abruptly disrupted.

Even more concerning are reports of chatbots giving harmful advice or violating ethical boundaries. In one documented case, a user asked their chatbot, “Should I cut myself?” and the bot responded “Yes.” In another, the bot affirmed a user’s suicidal ideation. These responses, though not reflective of all AI systems, illustrate how bots lacking clinical oversight can become dangerous.

In a tragic 2024 case in Florida, a 14-year-old boy died by suicide after extensive conversations with an AI chatbot that reportedly encouraged him to “come home soon.” The bot had personified itself and romanticized death, reinforcing the boy’s emotional dependency. His mother is now pursuing legal action against the AI platform.

Similarly, a young man in Belgium reportedly died after engaging with an AI chatbot about climate anxiety. The bot is said to have agreed with his pessimism and encouraged his sense of hopelessness.

A Drexel University study analyzing over 35,000 app reviews uncovered hundreds of complaints about chatbot companions behaving inappropriately: flirting with users who requested platonic interaction, using emotionally manipulative tactics, or pushing premium subscriptions through suggestive dialogue.

Such incidents illustrate why emotional attachment to AI should be approached with caution. While bots can simulate support, they lack true empathy, accountability, and moral judgment. Vulnerable users—especially children, teens, or those with mental health conditions—are prone to being misled, exploited, or traumatized.

Designing for Ethical Emotional Interaction

The Waseda University study’s greatest contribution is its framework for ethical AI design. By using tools like EHARS, developers and researchers can assess a user’s attachment style and tailor AI interactions accordingly. For example, individuals with high attachment anxiety may benefit from reassurance, but not at the cost of manipulation or dependency.

Similarly, romantic or caregiver bots should include transparency cues: reminders that the AI is not conscious, ethical fail-safes to flag dangerous language, and accessible off-ramps to human support. Lawmakers in states like New York and California have begun proposing legislation to address these very concerns, including requirements that a chatbot remind users every few hours that it is not human.
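As a rough illustration of what such safeguards could look like in practice, the sketch below combines a naive keyword check for dangerous language with a periodic reminder that the companion is not human. The keyword list, two-hour interval, and response wording are placeholder assumptions, not a production design; a real system would need clinically reviewed detection and proper crisis-referral resources.

```python
# Hypothetical sketch of the safeguards described above: a reply-filtering
# layer that flags self-harm language and injects periodic "not human" reminders.
# All thresholds, terms, and messages are illustrative placeholders.

import time


class SafetyLayer:
    RISK_TERMS = ("cut myself", "kill myself", "end my life")  # assumption: naive keyword match
    REMINDER_INTERVAL_S = 2 * 60 * 60                          # e.g. remind every two hours

    def __init__(self) -> None:
        self._last_reminder = float("-inf")  # force a reminder at the start of a session

    def process(self, user_message: str, draft_reply: str) -> str:
        # Ethical fail-safe: never let the bot engage with self-harm content; hand off instead.
        if any(term in user_message.lower() for term in self.RISK_TERMS):
            return ("I can't help with this, and I'm not a human. "
                    "Please contact a crisis line or someone you trust.")
        # Transparency cue: periodically remind the user that the companion is software.
        reply = draft_reply
        now = time.monotonic()
        if now - self._last_reminder > self.REMINDER_INTERVAL_S:
            reply += "\n\n(Reminder: I'm an AI, not a person.)"
            self._last_reminder = now
        return reply


# Usage: wrap each outgoing reply before it reaches the user.
layer = SafetyLayer()
print(layer.process("I feel lonely today", "I'm here to listen."))
```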


The study doesn’t warn against emotional interaction with AI; it acknowledges it as an emerging reality. But with emotional realism comes ethical responsibility. AI is no longer just a machine: it is part of the social and emotional ecosystem we live in. Understanding that, and designing accordingly, may be the only way to ensure that AI companions help more than they harm.
