So much has changed in the 15 years since Kaiming He was a PhD student.
“If you were in your PhD stage, there was a high wall between different disciplines and subjects, and there was even a high wall within computer science,” He says. “The guy sitting next to me could be doing things that I completely couldn’t understand.”
In the seven months since he joined the MIT Schwarzman College of Computing as the Douglas Ross (1954) Career Development Professor of Software Technology in the Department of Electrical Engineering and Computer Science, He says he is experiencing something he believes is “very rare in human scientific history” — a lowering of the walls that separate different scientific disciplines.
“There is no way I could ever understand high-energy physics, chemistry, or the frontier of biology research, but now we’re seeing something that can help us to break down these walls,” He says, “and that’s the creation of a common language that has emerged in AI.”
Building the AI bridge
According to He, this shift began in 2012 in the wake of the “deep learning revolution,” a point when it was realized that this set of machine-learning methods based on neural networks was so powerful that it could be put to greater use.
“At this point, computer vision — helping computers to see and perceive the world as if they were human beings — began growing very rapidly, because as it turns out you can apply this same methodology to many different problems and many different areas,” says He. “So the computer vision community quickly grew really large because these different subtopics were now able to speak a common language and share a common set of tools.”
From there, He says the trend began to expand to other areas of computer science, including natural language processing, speech recognition, and robotics, creating the foundation for ChatGPT and other progress toward artificial general intelligence (AGI).
“All of this has happened over the last decade, leading us to a new emerging trend that I’m really looking forward to, and that’s watching AI methodology propagate into other scientific disciplines,” says He.
One of the most famous examples, He says, is AlphaFold, an artificial intelligence program developed by Google DeepMind that predicts protein structures.
“It’s a very different scientific discipline, a very different problem, but people are using the same set of AI tools, the same methodology to solve these problems,” He says, “and I think that’s just the beginning.”
The future of AI in science
Since coming to MIT in February 2024, He says he has talked to professors in almost every department. Some days he finds himself in conversation with two or more professors from very different backgrounds.
“I certainly don’t fully understand their area of research, but they will introduce some context and then we can begin to discuss deep learning, machine learning, [and] neural network models for their problems,” He says. “In this sense, these AI tools are like a common language between these scientific areas: the machine learning tools ‘translate’ their terminology and concepts into terms that I can understand, and then I can learn their problems and share my experience, and sometimes propose solutions or opportunities for them to explore.”
Expanding into different scientific disciplines has significant potential, from using video analysis to predict weather and climate trends to expediting the research cycle and reducing costs in new drug discovery.
While AI tools provide a clear benefit to the work of He’s scientist colleagues, He also notes the reciprocal effect they can have, and have had, on the creation and advancement of AI.
“Scientists provide new problems and challenges that help us continue to evolve these tools,” says He. “But it is also important to remember that many of today’s AI tools stem from earlier scientific areas — for example, artificial neural networks were inspired by biological observations, and diffusion models for image generation were motivated by physics.”
“Science and AI are not isolated subjects. We have been approaching the same goal from different perspectives, and now we are coming together.”
And what better place for them to come together than MIT.
“It is not surprising that MIT has seen this change sooner than many other places,” He says. “[The MIT Schwarzman College of Computing] created an environment that connects different people and lets them sit together, talk together, work together, and exchange their ideas, all while speaking the same language — and I’m seeing this begin to happen.”
As for when the walls will fully come down, He notes that this is a long-term investment that won’t happen overnight.
“Many years ago, computers were considered high tech and you needed specific knowledge to understand them, but now everyone is using a computer,” He says. “I expect that in 10 or more years, everyone will be using some form of AI in some way for their research — it will just be one of their basic tools, their basic language, and they can use AI to solve their problems.”