Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and stall improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a possible response to those challenges.
The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.
The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program.
“The foundation of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”
A chance collaboration
Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.
“We’re trying to incorporate data science into health care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, since the science we create isn’t neutral.”
Language is a non-neutral mediator in health care delivery, the team believes, and can be either a boon or a barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”
Technology, they argue, impacts casual communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.
Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers on responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering the challenges of communicating across the linguistic and cultural divides that can occur in health care, demands a nuanced approach.
“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.
Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.
“Science has to have a heart”
LLMs can potentially help scientists improve health care, although there are systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”
The point, Urlaub says, is to investigate rigorously while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.
“Nobody’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”
“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to help eliminate gaps in communication between doctors and patients?”
Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient may not be comfortable describing pain or discomfort because of a physician’s position as an authority, or because their culture demands deference to those perceived as authority figures, misunderstandings can be dangerous.
Changing the conversation
AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks that offer valuable cultural and linguistic contexts in which patient and practitioner can rely on data-driven, research-supported tools to improve dialogue. Institutions must reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.
“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.
“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”
“Collaborations like these can allow for deep thinking and better ideas,” Urlaub says.
Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.
The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.
Greater integration between the social and hard sciences can increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view their relationship, while offering both shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives.
“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”
Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Because meaning and intent can shift across those contexts, it’s important to keep them in mind when designing AI tools.
“AI is our chance to rewrite the rules”
While there’s a lot of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care.
But the team isn’t daunted.
Celi believes there are opportunities to close the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”
Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.
“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active, engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”
Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration can happen without the kinds of arbitrary benchmarks institutions have previously used to mark success.
“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”
“We want to use our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we didn’t dream big enough about how a reimagined world could look.”