I’m an ER doctor. Here’s how I’m already using ChatGPT to help treat patients.

Dr. Tamayo-Sarver on duty

There are many futuristic possibilities for how AI might transform healthcare. But it can actually help doctors do a better job in small, safe ways right now.

With explosive interest in ChatGPT and other large language model programs, many AI evangelists are touting their use as a diagnostic tool in healthcare, or even broadly declaring them to be the future of medicine. As with many high-tech innovations, proclamations like these often overshadow small but crucial applications that are immediately within our grasp.

I say that as a working ER doctor who’s using ChatGPT to help treat patients right now.

But not for diagnosis. As I recently wrote and expanded on for Inflect’s podcast, ChatGPT works fairly well as a diagnostic assistant, but only if you feed it perfect information and the actual patient has a classic presentation, which is never the case. (And you don’t mind a 50% success rate that often misses life-threatening conditions.)

A more immediate utility of ChatGPT in medicine was dramatized for me at around 3 a.m. during a recent night shift, while treating a 96-year-old with dementia who had difficulty breathing due to fluid in her lungs. Adding to the challenge, her three children (all in their seventies) were also in the ER room.

Understandably concerned, they hovered around my staff and me, to the point where their constant barrage of requests was actually slowing down treatment. To compound the issue, they were absolutely certain they knew what their mother needed.

“Nurse, our mother needs IV fluids to treat her dehydration. Please hurry!” All three of them repeated variations of that request to us for over an hour.

Small problem: Administering IV fluids would have made her much worse, or even killed her.

“I know that you are concerned about your mom,” I tried explaining to them. “But she cannot breathe right now because she has pulmonary edema, which is fluid in her lungs. If I hydrate her with IV fluids, it will make her pulmonary edema worse and she might die. Once we get the fluid out of her lungs and she is breathing better, then we can worry about her being dehydrated.”

“But every time she is sick, she just needs an IV because of dehydration,” the patient’s son insisted, adamant. “Why don’t you just give her some IV fluid? She will be better in no time.”

I tried rephrasing my explanation in several different ways, but judging by their blank expressions, none of them resonated. This is actually a common situation in the ER. People don’t wake up planning on an emergency that brings them to me in the middle of the night, and they are often in a decompensated emotional state.

To make matters worse, several other patients were in more immediate need of my attention.

Desperate for a solution, I went down the hall to my computer and fired up ChatGPT-4, typing in:

“Explain why you would not give IV fluids to someone with severe pulmonary edema and respiratory distress even though you might be concerned that the patient is dehydrated. Explain it in simple and compassionate terms so that a confused person who cares about their mother can understand.”

Within seconds, ChatGPT produced this:

I truly understand how much you care for your mother, and it’s natural to feel concerned about her well-being.

In cases of severe pulmonary edema and respiratory distress, it might seem like giving IV fluids would help with hydration, but it could actually make her condition worse.

When someone has pulmonary edema, their lungs are filled with excess fluid, which makes breathing difficult. If we were to give your mother IV fluids, it could potentially increase the fluid in her lungs, making it even harder for her to breathe and causing her more discomfort.

Our primary goal is to ensure your mother’s comfort and well-being. The medical team is dedicated to providing the best possible care for her, and they will closely monitor her hydration levels while using other treatments that are more appropriate for her specific condition.

Please know that your mother’s health and comfort are our top priorities, and we are here to support all of you during this difficult time. If you have any questions or concerns, please don’t hesitate to reach out to the medical team.

We are all in this together, and we will do everything we can to help your mother recover.

I printed this response out and read it to the concerned family. As I recited ChatGPT’s words, their agitated expressions immediately melted into calm agreeability.

Even better, I handed the printed copy to the nurse. While the patient was waiting for an ICU bed, the family would repeatedly return to ask about IV fluids (some members may have been struggling with memory impairment themselves), and the nurse would read the script to them. It always received a warm reception.

Just as key, the family quietly stepped back and let my staff focus on their work. I could then devote my time to the other patients who were much more sick, and manage the chaos for the rest of the night shift.

Since this incident, I’ve taken to using ChatGPT to help empathically explain specific medical scenarios to patients and their family members. It has become a useful resource for the frequent situations where my ER ward is too busy or short-staffed to explain complex medical diagnoses in a way that is accurate but easy to understand.

In doing so, I’ve come to realize that working with ChatGPT is like working with an incredibly smart, hard-working, and occasionally hungover intern. That has become my mental model for thinking about the usefulness of ChatGPT.

Now, for any potential application, I think, “Would a dedicated but occasionally hungover intern working on this make things easier for me and my staff, or would the work required to manage them end up being more effort than simply doing it without their involvement?”

Seen from that perspective, ChatGPT or a hungover intern can still, for instance:

  • Take down my patient’s history
  • Create long-form written communication for patients and staff
  • Explain highly technical information to patients simply, with empathy and compassion

In each case, the output from the hungover intern/ChatGPT must be carefully checked before it is used. But in these scenarios, reviewing existing work is much faster than starting from scratch. As for my actual human (and hopefully not hungover) interns, ChatGPT would free them up to focus on the activity needed most: caring for patients.

As an advocate of expanding the use of AI in healthcare, my hope is that the industry focuses on more immediately achievable applications like this, as opposed to long-term prospects in which artificial intelligence completely transforms medicine.

Many or most of these futuristic scenarios overlook the practical challenges of implementation. Satisfying HIPAA rules around patient privacy alone may take years, or even decades, to resolve before we could even contemplate directly using programs like ChatGPT in a medical setting.

Indirect uses of ChatGPT, however, are often ready for implementation now, and are desperately needed, especially with physician burnout levels at an all-time high. Already constrained by the few minutes we’re allotted to see each patient in person, we need solutions that expand and enrich the doctor-patient relationship.

These time constraints lead directly to a common complaint: “doctor speak,” in which highly technical explanations for symptoms and diagnoses overwhelm or even confuse patients, rather than making them feel cared for. All too often, physicians and their staff are too overwhelmed themselves to explain a medical situation in patient, empathic, plain language.

According to the CDC, emergency rooms across the U.S. see over 131 million visits per year. Assuming, for the sake of argument, that using ChatGPT to eliminate “doctor speak” were scaled to ERs across the country, and it saved an average of five minutes per patient, that would translate to well over 10 million hours of time saved per year, hours that physicians and their staff could then devote to more hands-on care for more patients.
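
For what it’s worth, the arithmetic behind that estimate is easy to check in a few lines of code; this is a rough sketch, and both inputs are the same illustrative assumptions used above:

```python
# Rough back-of-the-envelope check of the estimate above.
# Both inputs are illustrative assumptions: ~131 million annual ER visits
# (the CDC figure cited in the text) and 5 minutes saved per visit.
annual_er_visits = 131_000_000
minutes_saved_per_visit = 5

hours_saved_per_year = annual_er_visits * minutes_saved_per_visit / 60
print(f"{hours_saved_per_year:,.0f} hours saved per year")  # about 10,916,667
```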

There are still workflow challenges with these approaches. Currently, anything entered into ChatGPT is essentially published to the web; to be consistent with HIPAA, any details that would enable linkage to a specific patient would have to be removed or altered.
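
To make that concrete, here is a minimal, hypothetical sketch of what such a scrubbing step might look like before any text is pasted into an external tool. It is an illustration only, not a HIPAA-compliant de-identification method; the patterns and the scrub function are assumptions for demonstration, and real de-identification covers many more identifier types than simple pattern matching can catch.

```python
import re

# Hypothetical sketch of a pre-submission scrubbing step: strip a few
# obvious identifiers from free text before it is sent to an external
# LLM service. Real HIPAA de-identification covers 18 identifier
# categories and needs far more than simple pattern matching.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

note = "96 y/o female, MRN 4482913, seen 4/12/2023, family contact 555-867-5309."
print(scrub(note))
# -> "96 y/o female, [MRN REMOVED], seen [DATE REMOVED], family contact [PHONE REMOVED]."
```

Any real system would also need review by privacy and compliance staff and far more robust detection than this kind of rule-of-thumb filtering.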

Moreover, systems that put guardrails on ChatGPT’s direct interaction with patients, in the case of medical history taking (for example), would have to be developed so that the hungover-intern aspect doesn’t start producing catastrophic advice. But these are not insurmountable challenges, and the payoff is real.

I’m a little embarrassed to admit that I have learned better ways of explaining things to my own patients from ChatGPT’s suggested responses. But I’m also deeply appreciative of the increased human connection I feel with a patient who understands what I’m doing for them, and why.

There’s a lot of hype about ChatGPT and other large language models taking away physicians’ jobs because of their massive knowledge base. They won’t. But in a curious irony, my ER staff and I are able to devote much more time to the human equation of healthcare, thanks to artificial intelligence.

Originally published here in FastCompany

Dr. Josh Tamayo-Sarver works clinically in the emergency department of his local community and is a vice president of innovation at Inflect Health, an innovation incubator for health tech.
