For as long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from data center sprawl. But this week showed that another threat entirely, that of children forming unhealthy bonds with AI, is the one pulling AI safety out of the academic fringe and into regulators' crosshairs.
This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by the US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship. Stories in reputable outlets about "AI psychosis" have highlighted how endless conversations with chatbots can lead people down delusional spirals.
It's hard to overstate the impact of these stories. To the public, they're proof that AI isn't merely imperfect, but a technology that's more harmful than helpful. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.
A California law passes the legislature
On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders for users they know to be minors that responses are AI generated. Companies would also need to have a protocol for addressing suicide and self-harm and provide annual reports on instances of suicidal ideation in users' conversations with their chatbots. The bill was led by Democratic state senator Steve Padilla, passed with heavy bipartisan support, and now awaits Governor Gavin Newsom's signature.
There are reasons to be skeptical of the bill's impact. It doesn't specify the steps companies should take to identify which users are minors, and many AI companies already include referrals to crisis providers when someone is talking about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give advice related to suicide anyway.)
Still, it's undoubtedly the most significant of the efforts to rein in companion-like behaviors in AI models, which are in the works in other states too. If the bill becomes law, it would strike a blow to the position OpenAI has taken, which is that "America leads best with clear, nationwide rules, not a patchwork of state or local regulations," as the company's chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.
The Federal Trade Commission takes aim
That same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, measure and test the impact of their chatbots, and more. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.
The White House now wields immense, and potentially illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled the firing illegal, but last week the US Supreme Court temporarily permitted it.
"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," said FTC chairman Andrew Ferguson in a press release about the inquiry.
Right now it's just that, an inquiry, but the process might (depending on how public the FTC makes its findings) reveal the inner workings of how these companies build their AI companions to keep users coming back again and again.
Sam Altman on suicide cases
On the same day (a busy one for AI news), Tucker Carlson published an hour-long interview with OpenAI's CEO, Sam Altman. It covers a lot of ground, including Altman's battle with Elon Musk, OpenAI's military customers, and conspiracy theories about the death of a former employee, but it also includes the most candid comments Altman has made to date about the cases of suicide following conversations with AI.
Altman talked about "the tension between user freedom and privacy and protecting vulnerable users" in cases like these. But then he offered up something I hadn't heard before.
"I think it'd be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with the parents, we do call the authorities," he said. "That would be a change."
So where does all this go next? For now, it's clear that, at least in the case of children harmed by AI companionship, companies' familiar playbook won't hold. They can no longer deflect responsibility by leaning on privacy, personalization, or "user choice." Pressure to take a harder line is mounting from state laws, regulators, and an outraged public.
But what will that look like? Politically, the left and the right are now paying attention to AI's harm to children, but their solutions differ. On the right, the proposed solution aligns with the wave of internet age-verification laws that have now been passed in over 20 states. These are meant to shield kids from adult content while defending "family values." On the left, it's the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers.
Consensus on the problem is easier than agreement on the cure. As it stands, it looks likely we'll end up with exactly the patchwork of state and local regulations that OpenAI (and many others) have lobbied against.
For now, it's down to companies to decide where to draw the lines. They're having to decide things like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: Companies have built chatbots to act like caring humans, but they've postponed developing the standards and accountability we demand of real caregivers. The clock is now running out.