As more connected devices demand an increasing amount of bandwidth for tasks like teleworking and cloud computing, it will become extremely difficult to manage the finite amount of wireless spectrum available for all users to share.
Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and can’t operate in real time.
Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.
The device could be especially useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.
By enabling an edge device to perform deep-learning computations in real time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor the health of a patient’s heart.
“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper.
He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in .
Light-speed processing
State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.
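For readers who want a concrete picture of that conventional digital pipeline, here is a minimal Python sketch (not drawn from the paper): the digitized signal is turned into a spectrogram image, which would then be fed to a trained neural network. The `cnn_classify` function below is only a hypothetical placeholder for such a model.

```python
# Minimal sketch of the conventional digital approach: digitize the signal,
# build a time-frequency "image," and hand it to a deep-learning classifier.
import numpy as np
from scipy.signal import spectrogram

fs = 1e6                           # sample rate in Hz (illustrative value)
t = np.arange(0, 1e-3, 1 / fs)     # 1 ms of samples
sig = np.cos(2 * np.pi * 100e3 * t * (1 + 0.1 * np.sin(2 * np.pi * 1e3 * t)))

# Convert the 1-D signal into a 2-D spectrogram image.
freqs, times, sxx = spectrogram(sig, fs=fs, nperseg=128)
image = 10 * np.log10(sxx + 1e-12)     # log-power image fed to the network

def cnn_classify(img):
    """Stand-in for a trained deep neural network (hypothetical)."""
    return "FM" if img.std() > 0 else "unknown"

print(image.shape, cnn_classify(image))
```

The deep-network step is the expensive part: every classification requires millions of digital multiply-accumulate operations, which is what makes this route too slow and power-hungry for many time-sensitive applications.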
Optical systems can speed up deep neural networks by encoding and processing data using light, which is also less energy-intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when used for signal processing, while ensuring the optical device is scalable.
By developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN), the researchers tackled that problem head-on.
The MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain, before the wireless signals are digitized.
The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.
Thanks to this design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require one device for each individual computational unit, or “neuron.”
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers accomplish this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
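The article does not spell out photoelectric multiplication in detail, but the underlying idea can be sketched numerically: a photodetector responds to optical intensity, the square of the total field, so superimposing two modulated fields produces a cross term whose amplitude is the product of the two signal values. The toy simulation below (all values illustrative, not taken from the paper) shows that multiplication emerging at the difference frequency.

```python
# Conceptual sketch of photoelectric multiplication (illustrative only, not the
# paper's exact implementation): square-law detection of two superimposed tones
# yields a mixing term whose amplitude is the product of the two amplitudes.
import numpy as np

fs = 1e4
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 1000.0, 1300.0        # two carrier frequencies (arbitrary units)
x, w = 0.7, 0.4                # a data value and a weight to be multiplied

field = x * np.cos(2 * np.pi * f1 * t) + w * np.cos(2 * np.pi * f2 * t)
intensity = field ** 2         # square-law detection at the photodiode

# The mixing product appears at the difference frequency f2 - f1 with
# amplitude x * w.
spectrum = np.abs(np.fft.rfft(intensity)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)
idx = np.argmin(np.abs(freqs - (f2 - f1)))
print(spectrum[idx], x * w)    # both are approximately 0.28
```

Because the multiplication happens in the detection step itself, many such products can be formed in parallel across frequencies, which is what lets a single device stand in for an entire layer of neurons.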
Results in nanoseconds
MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal to extract the data it carries, as in the toy example below.
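As a rough illustration of why that matters downstream (a toy example, not the authors’ system), an inferred modulation label lets the device choose the right demodulator to recover the transmitted bits.

```python
# Toy illustration: once the modulation format is classified, the edge device
# can select the matching demodulator and extract the data the signal carries.
import numpy as np

def demod_bpsk(symbols):
    return (symbols.real > 0).astype(int)

def demod_qpsk(symbols):
    bits = np.empty((symbols.size, 2), dtype=int)
    bits[:, 0] = symbols.real > 0
    bits[:, 1] = symbols.imag > 0
    return bits.ravel()

DEMODULATORS = {"BPSK": demod_bpsk, "QPSK": demod_qpsk}

def recover_data(symbols, inferred_modulation):
    """`inferred_modulation` plays the role of the classifier's output."""
    return DEMODULATORS[inferred_modulation](symbols)

rx = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # noiseless QPSK symbols
print(recover_data(rx, "QPSK"))                      # -> [1 1 0 1 0 0 1 0]
```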
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted it to,” Davis says.
When they tested their architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, which can quickly converge to more than 99 percent accuracy using multiple measurements. MAFT-ONN only required about 120 nanoseconds to perform the entire process.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
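To see why a few extra nanosecond-scale measurements buy so much accuracy, here is a back-of-the-envelope simulation using the 85 percent single-shot figure reported above. The majority-vote aggregation is only an assumed stand-in for the paper’s multi-measurement procedure.

```python
# Back-of-the-envelope illustration: if each single-shot inference is right
# 85% of the time, majority-voting over a handful of shots drives the overall
# accuracy much higher. The voting scheme is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
p_single = 0.85           # single-shot accuracy reported in the article
n_trials = 100_000

for n_shots in (1, 3, 5, 11):
    shots = rng.random((n_trials, n_shots)) < p_single   # True = correct shot
    majority_correct = shots.sum(axis=1) > n_shots / 2
    print(n_shots, round(majority_correct.mean(), 4))
# roughly 0.85, 0.94, 0.97, and 0.998 as the number of shots grows
```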
While state-of-the-art digital radio frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.
Moving forward, the researchers want to employ what are known as multiplexing schemes so they can perform more computations and scale up the MAFT-ONN. They also want to extend their work into more complex deep learning architectures that could run transformer models or LLMs.
This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.