3 Questions: Leo Anthony Celi on ChatGPT and medicine

Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of medical education and the evaluation of students?

A: The framing of medical knowledge as something that can be encapsulated into multiple-choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice.

ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires appreciation that ground truths in medicine continually shift, and, more importantly, an understanding of how and why they shift.

Q: What steps do you think the medical community should take to change how students are taught and evaluated?

A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with, and being able to probe, the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.

Medical education also requires awareness of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to encourage cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

Q: Do you see any upside to ChatGPT’s success on this exam? Are there beneficial ways in which ChatGPT and other forms of AI can contribute to the practice of medicine?

A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to those biases. Ground truths in medicine are continuously shifting, and currently, there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are trained on. Nor do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver on its promise once we have optimized the data input.
