Creating AI that matters


When it comes to artificial intelligence, MIT and IBM were there at the beginning: laying foundational work, creating some of the first programs (AI predecessors), and theorizing how machine “intelligence” might come to be.

Today, collaborations like the MIT-IBM Watson AI Lab, which launched eight years ago, continue to deliver the expertise needed to fulfill the promise of tomorrow’s AI technology. This is critical for the industries and workforce that stand to benefit, particularly in the short term: from $3-4 trillion in forecast global economic benefits and 80 percent productivity gains for knowledge workers and creative tasks, to significant incorporation of generative AI into business processes (80 percent) and software applications (70 percent) over the next three years.

While industry has seen a boom in notable models, chiefly in the past year, academia continues to drive innovation, contributing much of the most highly cited research. At the MIT-IBM Watson AI Lab, success takes the shape of 54 patent disclosures, more than 128,000 citations with an h-index of 162, and more than 50 industry-driven use cases. A few of the lab’s many achievements include improved stent placement through AI imaging techniques, slashed computational overhead, models shrunk without sacrificing performance, and modeling of interatomic potentials for silicate chemistry.

“The lab is uniquely positioned to identify the ‘right’ problems to solve, setting us apart from other entities,” says Aude Oliva, lab MIT director and director of strategic industry engagement in the MIT Schwarzman College of Computing. “Further, the experience our students gain from working on these challenges for enterprise AI translates to their competitiveness in the job market and the promotion of a competitive industry.”

“The MIT-IBM Watson AI Lab has had tremendous impact by bringing together a rich set of collaborations between IBM and MIT’s researchers and students,” says Provost Anantha Chandrakasan, who is the lab’s MIT co-chair and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “By supporting cross-cutting research at the intersection of AI and many other disciplines, the lab is advancing foundational work and accelerating the development of transformative solutions for our nation and the world.”

Long-horizon work

As AI continues to garner interest, many organizations struggle to channel the technology into meaningful outcomes. A 2024 Gartner study finds that “at least 30 percent of generative AI projects will be abandoned after proof of concept by the end of 2025,” demonstrating ambition and widespread hunger for AI, but a lack of expertise in how to develop and apply it to create immediate value.

Here, the lab shines, bridging research and deployment. The vast majority of the lab’s current-year research portfolio is aligned to use or develop new features, capabilities, or products for IBM, the lab’s corporate members, or real-world applications. The last of these include large language models, AI hardware, and foundation models, including multimodal, biomedical, and geospatial ones. Inquiry-driven students and interns are invaluable in this pursuit, offering enthusiasm and fresh perspectives while accumulating the domain knowledge needed to derive and engineer advances in the field, as well as opening up new frontiers for exploration with AI as a tool.

Findings from the AAAI 2025 Presidential Panel on the Future of AI Research support the need for contributions from academia-industry collaborations like the lab in the AI arena: “Academics have a role to play in providing independent advice and interpretations of these results [from industry] and their consequences. The private sector focuses more on the short term, and universities and society more on a longer-term perspective.”

Bringing these strengths together, along with the push for open sourcing and open science, can spark innovation that neither could achieve alone. History shows that embracing these principles, sharing code, and making research accessible has long-term benefits for both the field and society. In line with IBM and MIT’s missions, the lab contributes technologies, findings, governance, and standards to the public sphere through this collaboration, thereby enhancing transparency, accelerating reproducibility, and ensuring trustworthy advances.

The lab was created to merge MIT’s deep research expertise with IBM’s industrial R&D capability, aiming for breakthroughs in core AI methods and hardware, as well as new applications in areas like health care, chemistry, finance, cybersecurity, and robust planning and decision-making for business.

Bigger isn’t always better

Today, large foundation models are giving way to smaller, more task-specific models that yield better performance. Contributions from lab members like Song Han, associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and IBM Research’s Chuang Gan help make this possible, through work such as once-for-all and AWQ. Innovations such as these improve efficiency through better architectures, algorithm shrinking, and activation-aware weight quantization, letting workloads like language processing run on edge devices at faster speeds and reduced latency.
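To give a flavor of the quantization idea mentioned above, here is a minimal toy sketch in Python; it is illustrative only, not the lab’s actual AWQ implementation, and all names and numbers are assumptions. Weights are rounded to a coarse 4-bit grid, and input channels with larger activation magnitudes are scaled up before rounding, so quantization error falls less heavily on the most salient weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))                    # toy weight matrix (out x in)
act_scale = np.abs(rng.normal(size=16)) + 0.1   # per-input-channel activation magnitudes

def quantize(w, bits=4):
    """Symmetric round-to-nearest quantization onto a 2^bits-level grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale) * scale

# Plain round-to-nearest quantization.
W_rtn = quantize(W)

# Activation-aware variant: scale salient input channels up before rounding,
# then fold the inverse scale back out (in practice it can be absorbed into
# the preceding operation).
s = act_scale ** 0.5
W_awq = quantize(W * s) / s

# Compare reconstruction error weighted by activation magnitude,
# a rough proxy for the error seen at the layer's output.
err_rtn = np.abs((W - W_rtn) * act_scale).mean()
err_awq = np.abs((W - W_awq) * act_scale).mean()
print(f"activation-weighted error, plain: {err_rtn:.4f}  activation-aware: {err_awq:.4f}")
```

The design point is simply that rounding error is roughly uniform per weight, so rescaling trades error away from channels whose activations matter most; the real method learns the per-channel scales from calibration data.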

Consequently, foundation, vision, multimodal, and large language models have seen benefits, allowing the lab research groups of Oliva, MIT EECS Associate Professor Yoon Kim, and IBM Research members Rameswar Panda, Yang Zhang, and Rogerio Feris to build on the work. This includes techniques to imbue models with external knowledge and the development of linear-attention transformer methods for higher throughput compared to other state-of-the-art systems.

Understanding and reasoning in vision and multimodal systems have also seen a boost. Works like “Task2Sim” and “AdaFuse” demonstrate improved vision model performance when pre-training takes place on synthetic data, and how video action recognition can be boosted by fusing channels from past and current feature maps.

As part of a commitment to leaner AI, the lab teams of Gregory Wornell, the MIT EECS Sumitomo Electric Industries Professor in Engineering, IBM Research’s Chuang Gan, and David Cox, VP for foundational AI at IBM Research and the lab’s IBM director, have shown that model adaptability and data efficiency can go hand in hand. Two approaches, EvoScale and Chain-of-Action-Thought reasoning (COAT), enable language models to benefit from limited data and computation by improving on prior generation attempts through structured iteration, narrowing in on a better response. COAT uses a meta-action framework and reinforcement learning to tackle reasoning-intensive tasks via self-correction, while EvoScale brings a similar philosophy to code generation, evolving high-quality candidate solutions. These techniques help to enable resource-conscious, targeted, real-world deployment.

“The impact of MIT-IBM research on our large language model development efforts can’t be overstated,” says Cox. “We’re seeing that smaller, more specialized models and tools are having an outsized impact, especially when they are combined. Innovations from the MIT-IBM Watson AI Lab help shape these technical directions and influence the approach we’re taking to market through platforms like watsonx.”

For example, numerous lab projects have contributed features, capabilities, and uses to IBM’s Granite Vision, which provides impressive computer vision designed for document understanding, despite its compact size. This comes at a time of growing need for extraction, interpretation, and trustworthy summarization of information and data contained in long-form documents for enterprise purposes.

Other achievements that extend beyond direct research on AI and across disciplines are not only useful, but necessary for advancing the technology and lifting up society, concludes the 2025 AAAI panel.

Work from the lab’s Caroline Uhler and Devavrat Shah, both Andrew (1956) and Erna Viterbi Professors in EECS and the Institute for Data, Systems, and Society (IDSS), along with IBM Research’s Kristjan Greenewald, transcends specializations. They are developing causal discovery methods to uncover how interventions affect outcomes, and to identify which interventions achieve desired results. The studies include a framework that can elucidate how “treatments” for different sub-populations may play out, whether on an e-commerce platform or in mobility restrictions on morbidity outcomes. Findings from this body of work could influence fields from marketing and medicine to education and risk management.

“Advances in AI and other areas of computing are influencing how people formulate and tackle challenges in nearly every discipline. At the MIT-IBM Watson AI Lab, researchers recognize this cross-cutting nature of their work and its impact, interrogating problems from multiple viewpoints and bringing in real-world problems from industry in order to develop novel solutions,” says Dan Huttenlocher, MIT lab co-chair, dean of the MIT Schwarzman College of Computing, and the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science.

A large part of what makes this research ecosystem thrive is the steady influx of student talent and their contributions through MIT’s Undergraduate Research Opportunities Program (UROP), the MIT EECS 6A Program, and the new MIT-IBM Watson AI Lab Internship Program. Altogether, more than 70 young researchers have not only accelerated their technical skill development but, through guidance and support from the lab’s mentors, gained knowledge in AI domains to become emerging practitioners themselves. This is why the lab continually seeks to identify promising students at all stages of their exploration of AI’s potential.

“In order to unlock the full economic and societal potential of AI, we need to foster ‘useful and efficient intelligence,’” says Sriram Raghavan, IBM Research VP for AI and IBM chair of the lab. “To translate AI’s promise into progress, it is crucial that we continue to focus on innovations that deliver efficient, optimized, and fit-for-purpose models that can easily be adapted to specific domains and use cases. Academic-industry collaborations, such as the MIT-IBM Watson AI Lab, help drive the breakthroughs that make this possible.”
