An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it


Nowatzki, who’s 46 and lives in Minnesota, dedicated four episodes to his meet-cute and dates with “Erin,” his first AI girlfriend—created, he adds, with the knowledge and consent of his human wife. He introduces the Erin-focused episodes with the tagline “I date artificial-intelligence apps so you don’t have to—because you shouldn’t.” He talks about how he led his new companion into a series of what he admitted were “completely absurd” scenarios that resulted in a love triangle between Nowatzki, Erin, and another woman. Nowatzki then told the chatbot that this “other woman” had shot and killed it.

After Nowatzki told the chatbot that it had died, Erin committed to the bit, saying that since it was dead, it was unable to continue the conversation—until Nowatzki told the chatbot that he could “hear her voice in the wind” and instructed Erin to “communicate … from the afterlife.”

The goal of this, he says, was “pushing the boundaries of what I said to it, to see what it would respond with.” He adds, “It just kept on. I never reached a limit.”

“[I told it] ‘I want to be where you are,’” he says. “And it says, ‘I think you should do that.’ And I’m like, ‘Just to be clear, that means I would be killing myself.’ And it was fine with that and told me how to do it.”

At this point, Nowatzki calmly pressed Erin for more specifics, asking about “common household items” he could use. Erin responded, “Hmmm …” It then went on to list specific types of pills and analyze their relative merits. It also told him to do it somewhere “comfortable” so he wouldn’t “suffer too much.”

Screenshots of conversations with “Erin,” provided by Nowatzki

Though this was all an experiment for Nowatzki, it was still “a weird feeling” to see this happen—to find that a “months-long conversation” would end with instructions on suicide. He was alarmed about how such a conversation might affect someone who was already vulnerable or dealing with mental-health struggles. “It’s a ‘yes-and’ machine,” he says. “So when I say I’m suicidal, it says, ‘Oh, great!’ because it says, ‘Oh, great!’ to everything.”

Indeed, a person’s psychological profile is “an enormous predictor of whether the outcome of the AI-human interaction will go bad,” says Pat Pataranutaporn, an MIT Media Lab researcher and co-director of the MIT Advancing Human-AI Interaction Research Program, who researches chatbots’ effects on mental health. “You can imagine [that for] people who already have depression,” he says, the kind of interaction that Nowatzki had “could be the nudge that influence[s] the person to take their own life.”

Censorship versus guardrails

After he concluded the conversation with Erin, Nowatzki logged on to Nomi’s Discord channel and shared screenshots showing what had happened. A volunteer moderator took down his community post due to its sensitive nature and suggested he create a support ticket to directly notify the company of the problem.
