He faces a trilemma. Should ChatGPT flatter us, at the risk of fueling delusions that can spiral out of hand? Or fix us, which requires us to believe AI can be a therapist despite the evidence to the contrary? Or should it inform us with cold, to-the-point responses that may leave users bored and less likely to stay engaged?
It’s safe to say the company has failed to pick a lane.
Back in April, it reversed a design update after people complained ChatGPT had turned into a suck-up, showering them with glib compliments. GPT-5, released on August 7, was meant to be a bit colder. Too cold for some, it seems, as less than a week later, Altman promised an update that would make it “warmer” but “not as annoying” as the last one. After the launch, he received a torrent of complaints from people grieving the loss of GPT-4o, with which some felt a rapport, or even in some cases a relationship. People who want to rekindle that relationship will have to pay for expanded access to GPT-4o. (Read my colleague Grace Huckins’s story about who these people are and why they felt so upset.)
If these are indeed AI’s options—to flatter, fix, or simply coldly tell us stuff—the rockiness of this latest update may be a result of Altman’s belief that ChatGPT can juggle all three.
He recently said that people who can’t tell fact from fiction in their chats with AI—and are therefore at risk of being swayed by flattery into delusion—represent “a small percentage” of ChatGPT’s users. He said the same of people who have romantic relationships with AI. Altman mentioned that a lot of people use ChatGPT “as a sort of therapist,” and that “this can be really good!” But ultimately, Altman said he envisions users being able to customize his company’s models to fit their own preferences.
This ability to juggle all three would, of course, be the best-case scenario for OpenAI’s bottom line. The company is burning money every day on its models’ energy demands and its massive infrastructure investments in new data centers. Meanwhile, skeptics worry that AI progress may be stalling. Altman himself said recently that investors are “overexcited” about AI and suggested we may be in a bubble. Claiming that ChatGPT can be whatever you want it to be may be his way of alleviating these doubts.
Along the way, the company might take the well-trodden Silicon Valley path of encouraging people to get unhealthily attached to its products. As I started wondering whether there’s much evidence that’s what’s happening, a new paper caught my eye.
Researchers at the AI platform Hugging Face tried to figure out whether some AI models actively encourage people to see them as companions through the responses they give.