With the launch of GPT-5, OpenAI has begun explicitly telling people to use its models for health advice. At the launch event, Altman welcomed onstage Felipe Millon, an OpenAI employee, and his wife, Carolina Millon, who had recently been diagnosed with multiple forms of cancer. Carolina spoke about asking ChatGPT for help with her diagnoses, saying that she had uploaded copies of her biopsy results to ChatGPT to translate the medical jargon and had asked the AI for help making decisions about things like whether or not to pursue radiation. The three called it an empowering example of shrinking the knowledge gap between doctors and patients.
With this change in approach, OpenAI is wading into dangerous waters.
For one, it’s using evidence that doctors can benefit from AI as a clinical tool, as in the Kenya study, to suggest that people without any medical background should ask the AI model for advice about their own health. The problem is that lots of people might ask for this advice without ever running it by a doctor (and are less likely to do so now that the chatbot rarely prompts them to).
Indeed, two days before the launch of GPT-5, a medical journal published a paper about a man who stopped eating salt and started ingesting dangerous amounts of bromide following a conversation with ChatGPT. He developed bromide poisoning, a condition that largely disappeared in the US after the Food and Drug Administration began curbing the use of bromide in over-the-counter medications in the 1970s, and then nearly died, spending weeks in the hospital.
So what’s the point of all this? Essentially, it’s about accountability. When AI companies move from promising general intelligence to offering humanlike helpfulness in a specific field like health care, it raises a second, still unanswered question about what will happen when mistakes are made. As things stand, there’s little indication that tech companies will be held responsible for the harm caused.
“When doctors give you harmful medical advice due to error or prejudicial bias, you can sue them for malpractice and get recompense,” says Damien Williams, an assistant professor of data science and philosophy at the University of North Carolina Charlotte.
“When ChatGPT gives you harmful medical advice because it’s been trained on prejudicial data, or because ‘hallucinations’ are inherent to the operations of the system, what’s your recourse?”