The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who set ChatGPT’s voice mode to a gender that was not their own for their interactions reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI plans to submit both studies to peer-reviewed journals.
Chatbots powered by large language models are still a nascent technology, and it’s difficult to study how they affect us emotionally. A lot of existing research in this area, including some of the new work by OpenAI and MIT, relies on self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists have so far discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a user’s messages, suggesting a kind of feedback loop: the happier you act, the happier the AI seems, and if you act sadder, so does the AI.
OpenAI and the MIT Media Lab used a two-pronged method. First they collected and analyzed real-world data from nearly 40 million interactions with ChatGPT. Then they asked the 4,076 users who’d had those interactions how the exchanges made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for at least five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. They found that participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely, and to rely on it more.
This work is an important first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions, says Jason Phang, an OpenAI safety researcher who worked on the project.
“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can begin to measure, and to start thinking about what the long-term impact on users is,” he says.
Although the research is welcome, it’s still difficult to identify when a human is, and isn’t, engaging with technology on an emotional level, says Devlin. She says the study participants may have been experiencing emotions that weren’t recorded by the researchers.
“In terms of what the teams set out to measure, people may not necessarily have been using ChatGPT in an emotional way, but you can’t divorce being a human from your interactions [with technology],” she says. “We use these emotion classifiers that we have created to look for certain things, but what that actually means to someone’s life is really hard to extrapolate.”