When sound waves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that lets us follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.
Neurons send signals by emitting spikes, also known as action potentials: brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, timing them with exquisite precision to match the oscillations of incoming sound waves.
With powerful new models of human hearing, scientists at MIT’s McGovern Institute for Brain Research have determined that this precise timing is important for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.
The open-access findings, reported Dec. 4 in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT professor and McGovern investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.
Science of sound
The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is integral to our perception of sound. Sound waves oscillate at rates that determine their pitch: Low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve that relays information from sound-detecting hair cells in the ear to the brain generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very precise points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also associate head of the MIT Department of Brain and Cognitive Sciences.
This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
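To make phase-locking concrete, here is a short Python sketch, a toy illustration rather than the study’s model: spike probability tracks the half-wave-rectified tone, so spikes cluster near the waveform’s peaks. The sampling rate, tone frequency, and firing rate are arbitrary choices for the demo.

```python
# Toy illustration of phase-locking (not the study's model): spikes are drawn
# with a probability that tracks the half-wave-rectified tone, so they cluster
# near the waveform's peaks at particular phases of each cycle.
import numpy as np

rng = np.random.default_rng(0)
fs = 100_000                        # samples/s, fine enough for sub-ms timing
freq = 440.0                        # tone frequency (Hz)
t = np.arange(0, 0.5, 1 / fs)       # 0.5 s of time
tone = np.sin(2 * np.pi * freq * t)

rate = 400.0 * np.maximum(tone, 0.0)      # instantaneous firing rate (spikes/s)
spikes = rng.random(t.size) < rate / fs   # Bernoulli approximation of Poisson firing
spike_times = t[spikes]

# Spike phases within the tone cycle pile up near the peak (0.25 of a cycle).
phases = (spike_times * freq) % 1.0
print(f"{spike_times.size} spikes; mean phase = {phases.mean():.2f} cycles")
```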
This has been difficult to study experimentally; animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler PhD ’24 turned to artificial neural networks.
Artificial hearing
Neuroscientists have long used computational models to explore how sensory information could be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For instance, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.
To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.
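A minimal sketch of this kind of pipeline appears below. The layer sizes, word-recognition head, and random stand-in input are hypothetical, invented for illustration; only the roughly 32,000-fiber figure comes from the article. The point is the shape of the approach: a network optimized for a real-world task reads out many channels of simulated auditory-nerve activity.

```python
# Minimal sketch, not the paper's architecture: the layers, task head, and
# random stand-in input are hypothetical; only the ~32,000-fiber figure comes
# from the article. A task-optimized network reads out simulated nerve activity.
import torch
import torch.nn as nn

N_FIBERS = 32_000    # simulated sound-detecting sensory neurons (per the article)
N_TIMEBINS = 100     # hypothetical temporal resolution of the spike representation
N_WORDS = 800        # hypothetical word-recognition vocabulary size

model = nn.Sequential(
    nn.Conv1d(N_FIBERS, 256, kernel_size=5, stride=2),  # pool across fibers and time
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                            # collapse the time axis
    nn.Flatten(),
    nn.Linear(256, N_WORDS),                            # word-recognition head
)

# Stand-in for the nerve model's output: (batch, fibers, time bins) of activity.
fake_spikes = torch.rand(1, N_FIBERS, N_TIMEBINS)
logits = model(fake_spikes)
print(logits.shape)  # torch.Size([1, 800]); training would optimize task accuracy
```

In the study itself, the input came from the simulated sensory neurons rather than random numbers, and the network was optimized for real-world tasks such as recognizing words and voices.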
The researchers showed that their model replicated human hearing well, better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices amid dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.
When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For instance, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.
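The flavor of that manipulation can be sketched in a few lines, again as an illustration rather than the paper’s analysis: Gaussian jitter is added to perfectly phase-locked spike times, and the damage is quantified with vector strength, a standard index of phase-locking (1 means perfect locking to the tone cycle, 0 means none). The tone frequency and jitter values are chosen for the demo.

```python
# Illustrative sketch of timing degradation (not the paper's analysis code):
# Gaussian jitter is added to phase-locked spike times, and the loss of timing
# information is measured with vector strength (1 = perfect locking, 0 = none).
import numpy as np

rng = np.random.default_rng(1)
freq = 440.0                            # tone frequency (Hz)
spike_times = np.arange(500) / freq     # one perfectly phase-locked spike per cycle

def vector_strength(times, f):
    """Magnitude of the mean resultant of spike phases at frequency f."""
    phases = 2 * np.pi * f * times
    return np.abs(np.mean(np.exp(1j * phases)))

for jitter_ms in (0.0, 0.1, 0.5, 2.0):  # sub-millisecond to multi-millisecond
    jittered = spike_times + rng.normal(0.0, jitter_ms * 1e-3, spike_times.size)
    print(f"jitter = {jitter_ms:3.1f} ms -> vector strength = "
          f"{vector_strength(jittered, freq):.3f}")
```

In this toy example, 0.1 ms of jitter barely dents phase-locking to a 440 Hz tone, while a couple of milliseconds erases it, consistent with the sub-millisecond precision described above.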
The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.
“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways; it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”