When parents teach a young child to relate to the world, they teach through associations and the identification of patterns. Take the letter S, for instance. Parents show their child enough examples of the letter, and before long the child can spot other examples in contexts where no guidance is given: at school, in a book, on a billboard.
Much of today's emerging artificial intelligence (AI) technology was taught in much the same way. Researchers fed a system correct examples of something they wanted it to recognize, and, like a young child, the AI began identifying patterns and extrapolating that knowledge to contexts it had never encountered, forming its own "neural network" for categorization. As with human intelligence, however, experts lost track of the inputs that informed the AI's decision-making.
The "black box problem" of AI thus arises from the fact that we don't fully understand how or why an AI system makes connections, nor the variables that factor into its decisions. This issue is especially relevant when seeking to improve systems' trustworthiness and safety and to establish governance for AI adoption.
From an AI-powered vehicle that fails to brake in time and injures pedestrians, to AI-reliant health tech devices that assist doctors in diagnosing patients, to biases exhibited by AI hiring-screening processes, the complexity behind these systems has led to the rise of a new field of study: the physics of AI, which seeks to further establish AI as a tool for humans to gain deeper understanding.
Now, a new independent study group will address these challenges by merging the fields of physics, psychology, philosophy and neuroscience in an interdisciplinary exploration of AI's mysteries.
The newly announced Physics of Artificial Intelligence Group is a spin-off of NTT Research's Physics & Informatics (PHI) Lab, and was unveiled at NTT's Upgrade 2025 conference in San Francisco, California, last week. It will continue to advance the Physics of Artificial Intelligence approach to understanding AI, which the team has been investigating for the past five years.
Dr. Hidenori Tanaka, who holds a PhD in Applied Physics & Computer Science and Engineering from Harvard University, will lead the new research group, building on his previous experience in NTT's Intelligent Systems Group and the CBS-NTT AI research program on the physics of intelligence at Harvard.
"As a physicist, I'm excited about the subject of intelligence because, mathematically, how can you think about the concept of creativity? How can you even think about kindness? These concepts would have remained abstract if it weren't for AI. It's easy to speculate, saying 'this is my definition of kindness,' which isn't mathematically meaningful, but now with AI it's practically necessary, because if we want to make AI kind, we have to tell it in the language of mathematics what kindness is, for instance," Dr. Tanaka told me last week on the sidelines of the Upgrade conference.
Early in its research, the PHI Lab recognized the importance of understanding the "black box" nature of AI and machine learning in order to develop new systems with improved energy efficiency for computation. AI's advancement over the last half decade, however, has raised increasingly pressing safety and trustworthiness concerns, which have become critical to industry applications and to governance decisions on AI adoption.
Through the new research group, NTT Research will examine the similarities between biological and artificial intelligences, hoping to unravel the complexities of AI mechanisms and to build a more harmonious fusion of human-AI collaboration.
Although novel in its integration of AI, this approach isn't new. Physicists have sought to reveal the precise workings of technology and its relationship to human understanding for centuries, from Galileo Galilei's studies of how objects move and his contributions to mechanics, to the way the steam engine informed the understanding of thermodynamics during the Industrial Revolution. In the 21st century, however, scientists are seeking to understand how AI works, in terms of how it is trained, accumulates knowledge and makes decisions, so that more cohesive, safe and trustworthy AI technologies can be designed in the future.
"AI is a neural network; the way it's structured is very similar to how a human brain works: neurons connected by synapses, which are all represented by numbers inside a computer. And then that's where we believe there can be physics… Physics is about taking anything from the universe, formulating mathematical hypotheses about its inner workings, and testing them," said Dr. Tanaka.
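Dr. Tanaka's point that a neural network is ultimately "numbers inside a computer" can be made concrete with a minimal sketch. The example below is purely illustrative (it is not code from NTT Research): each neuron sums its weighted inputs — the weights playing the role of synapses — adds a bias, and passes the result through a nonlinearity.

```python
import math

def forward(inputs, weights, biases):
    """One layer of a toy neural network: every output neuron takes a
    weighted sum of the inputs (the weights are the 'synapses'), adds a
    bias, and applies a tanh nonlinearity."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(math.tanh(total))
    return outputs

# Two input neurons feeding three hidden neurons -- all just numbers.
x = [0.5, -1.0]                                  # input activations
W = [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]]        # synaptic weights
b = [0.0, 0.1, -0.2]                             # biases
print(forward(x, W, b))                          # three activations in (-1, 1)
```

Training a real network amounts to nudging those weight numbers until the outputs match examples — which is exactly why the resulting decision process is hard to read back out, and why physicists see room for hypothesis-driven study.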
The new group will continue to collaborate with the Harvard University Center for Brain Science (CBS), and plans to work with Stanford University Associate Professor Surya Ganguli, with whom Dr. Tanaka has co-authored several papers.
However, Dr. Tanaka stresses that a natural-science and cross-industry approach will be fundamental. In 2017, when he was a PhD candidate at Harvard, the researcher realized that he wanted to do more than traditional physics and to follow in the footsteps of his predecessors, from Galileo to Newton and Einstein, in opening up new conceptual worlds in physics.
"Currently, AI is the one topic that I can talk to everyone about. As a researcher, it's great because everyone is always up for talking about AI, and I also learn from every conversation because I realize how people see and use AI differently, even beyond academic contexts. I see NTT's mission as being the catalyst to spark these conversations, regardless of people's backgrounds, because we learn from every interaction," Dr. Tanaka concluded.