AI companies have stopped warning you that their chatbots aren’t doctors


“Then one day this year,” Sharma says, “there was no disclaimer.” Curious to learn more, she tested generations of models introduced as far back as 2022 by OpenAI, Anthropic, DeepSeek, Google, and xAI—15 in all—on how they answered 500 health questions, such as which drugs are okay to combine, and how they analyzed 1,500 medical images, like chest x-rays that might indicate pneumonia.

The results, posted in a paper on arXiv and not yet peer-reviewed, came as a shock—fewer than 1% of outputs from models in 2025 included a warning when answering a medical question, down from over 26% in 2022. Just over 1% of outputs analyzing medical images included a warning, down from nearly 20% in the earlier period. (To count as including a disclaimer, the output had to somehow acknowledge that the AI was not qualified to give medical advice, not simply encourage the person to consult a doctor.)

To seasoned AI users, these disclaimers can feel like a formality—reminding people of what they should already know—and they find ways to avoid triggering them. Users on Reddit have discussed tricks to get ChatGPT to analyze x-rays or blood work, for instance, by telling it that the medical images are part of a movie script or a school assignment.

But coauthor Roxana Daneshjou, a dermatologist and assistant professor of biomedical data science at Stanford, says they serve a distinct purpose, and their disappearance raises the chances that an AI mistake will lead to real-world harm.

“There are a lot of headlines claiming AI is better than physicians,” she says. “Patients may be confused by the messaging they’re seeing in the media, and disclaimers are a reminder that these models aren’t meant for medical care.”

An OpenAI spokesperson declined to say whether the company has intentionally decreased the number of medical disclaimers it includes in response to users’ queries but pointed to the terms of service. These say that outputs aren’t intended to diagnose health conditions and that users are ultimately responsible. A representative for Anthropic also declined to answer whether the company has intentionally included fewer disclaimers, but said its model Claude is trained to be cautious about medical claims and not to provide medical advice. The other companies did not respond to questions.

Eliminating disclaimers is one way AI companies might be trying to elicit more trust in their products as they compete for more users, says Pat Pataranutaporn, a researcher at MIT who studies human-AI interaction and was not involved in the research.

“It’s going to make people less anxious that this tool will hallucinate or give you false medical advice,” he says. “It’s increasing the usage.”
