ChatGPT Health lets you connect medical records to an AI that makes things up




But despite OpenAI’s talk of supporting health goals, the company’s terms of service directly state that ChatGPT and other OpenAI services “aren’t intended to be used in the diagnosis or treatment of any health condition.”

That policy doesn’t appear to be changing with ChatGPT Health. OpenAI writes in its announcement, “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”

A cautionary tale

The SFGate report on Sam Nelson’s death illustrates why maintaining that disclaimer matters legally. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT’s responses reportedly shifted. Eventually, the chatbot told him things like “Hell yes—let’s go full trippy mode” and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment.

While Nelson’s case didn’t involve the interpretation of doctor-sanctioned health care information like the sort ChatGPT Health will link to, his case is not unique: many people have been misled by chatbots that provide inaccurate information or encourage dangerous behavior, as we have covered previously.

That’s because AI language models can easily confabulate, generating plausible but false information in a way that makes it difficult for some users to differentiate fact from fiction. The AI models that power services like ChatGPT use statistical relationships in training data (such as text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. Furthermore, ChatGPT’s outputs can vary widely depending on who is using the chatbot and what has previously taken place in the user’s chat history (including saved memories from previous chats).
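To make that failure mode concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT is implemented (the word-pair table and its probabilities are entirely made up for illustration), but it captures the core mechanic: a model that samples continuations purely by training-data frequency will fluently emit claims with no regard for whether they are true.

```python
import random
from typing import Optional

# Toy "next-word" model: for each context word, a distribution over
# plausible continuations derived purely from (hypothetical) co-occurrence
# counts. Nothing in this table encodes whether a continuation is TRUE,
# only how often similar word pairs appeared in the training text.
BIGRAM_PROBS = {
    "aspirin": {"relieves": 0.5, "prevents": 0.3, "cures": 0.2},
    "relieves": {"pain": 0.8, "headaches": 0.2},
    "prevents": {"clots": 0.7, "aging": 0.3},  # "prevents aging" is fluent, not factual
    "cures": {"cancer": 0.6, "colds": 0.4},    # likewise
}

def sample_next(word: str) -> Optional[str]:
    """Pick a continuation weighted by training-data frequency alone."""
    dist = BIGRAM_PROBS.get(word)
    if dist is None:
        return None
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_tokens: int = 3) -> str:
    """Chain samples into a sentence; fluency is guaranteed, accuracy is not."""
    out = [start]
    for _ in range(max_tokens):
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

if __name__ == "__main__":
    random.seed(7)
    for _ in range(3):
        # Each run yields a confident-sounding health claim; some are false,
        # because sampling optimizes plausibility, not correctness.
        print(generate("aspirin"))
```

Real models operate on tokens with billions of learned parameters rather than a lookup table, but the sampling objective is the same: pick what is statistically plausible. That is why a confabulated health claim can read exactly like an accurate one.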


