Imagine you are in an airplane with two pilots, one human and one computer. Both have their "hands" on the controls, but they are always looking out for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.
Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive copilot: a partnership between human and machine, rooted in understanding attention.
But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the neural system, it relies on "saliency maps," which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches as traditional autopilot systems do.
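The core idea behind a saliency map can be illustrated with a toy model. This is a minimal sketch, not the Air-Guardian code: for a linear model, the sensitivity of the output to each input pixel is just that pixel's weight, so the normalized absolute gradient serves as a map of where the model's "attention" lies.

```python
import numpy as np

def saliency_map(weights: np.ndarray) -> np.ndarray:
    """Return a [0, 1]-normalized map of absolute input sensitivities.

    For a linear model f(x) = sum(W * x), the gradient of the output with
    respect to each input pixel is W itself, so |W| acts as the saliency:
    pixels with larger absolute gradient influence the decision more.
    """
    s = np.abs(weights)
    return s / s.max() if s.max() > 0 else s

# Hypothetical 4x4 "image" weights: this toy model attends to the top-left.
W = np.zeros((4, 4))
W[0, 0] = 2.0
W[0, 1] = 1.0

sal = saliency_map(W)
print(sal[0, 0], sal[0, 1])  # 1.0 0.5 — the most salient pixel is top-left
```

Real systems compute such maps over a deep network's activations (the article names VisualBackProp for this), but the interpretation is the same: bright regions mark where the model is looking.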
The broader implications of this method reach beyond aviation. Similar cooperative control mechanisms could one day be used in cars, drones, and a wider spectrum of robotics.
"An exciting feature of our method is its differentiability," says MIT CSAIL postdoc Lianhao Yin, a lead author on a new paper about Air-Guardian. "Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the causal continuous-depth neural network model because of its dynamic features in mapping attention. Another unique aspect is adaptability. The Air-Guardian system isn't rigid; it can be adjusted based on the situation's demands, ensuring a balanced partnership between human and machine."
In field tests, both the pilot and the system made decisions based on the same raw images when navigating to the target waypoint. Air-Guardian's success was gauged by the cumulative rewards earned during flight and a shorter path to the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigating to target points.
"This system represents the innovative approach of human-centric AI-enabled aviation," adds Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. "Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI doesn't merely replace human judgment but complements it, resulting in enhanced safety and collaboration in the skies."
The true strength of Air-Guardian is its foundational technology. It pairs an optimization-based cooperative layer, which draws on visual attention from both human and machine, with liquid closed-form continuous-time neural networks (CfCs), known for their prowess in deciphering cause-and-effect relationships, to analyze incoming images for vital information. Complementing this is the VisualBackProp algorithm, which identifies the system's focal points within an image, ensuring clear understanding of its attention maps.
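The cooperative-layer idea described above can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the paper's formulation: the function names and the blending rule (cosine similarity between attention maps deciding how much weight the human's command gets) are hypothetical, but they capture the behavior from the opening paragraph, where agreement lets the human steer and divergence shifts control toward the machine.

```python
import numpy as np

def attention_agreement(human_att: np.ndarray, machine_att: np.ndarray) -> float:
    """Cosine similarity between two flattened attention maps, in [0, 1]
    for non-negative maps. 1.0 means both are looking at the same regions."""
    h, m = human_att.ravel(), machine_att.ravel()
    denom = np.linalg.norm(h) * np.linalg.norm(m)
    return float(h @ m / denom) if denom > 0 else 0.0

def blend_controls(human_cmd, machine_cmd, human_att, machine_att) -> np.ndarray:
    """Weight the human's command by attention agreement: when the maps
    agree, the human steers; as they diverge, the machine takes over."""
    alpha = attention_agreement(human_att, machine_att)
    return alpha * np.asarray(human_cmd) + (1 - alpha) * np.asarray(machine_cmd)

# Identical attention maps: full agreement, so the human command passes through.
att = np.array([[0.0, 1.0],
                [0.0, 0.0]])
cmd = blend_controls([0.5, 0.0], [-0.5, 0.0], att, att)
print(cmd)  # [0.5 0. ]
```

The actual system learns this arbitration end-to-end through a differentiable optimization layer rather than a fixed similarity rule; the sketch only shows the shape of the interface between the two attention signals and the control output.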
For future mass adoption, the human-machine interface will need refinement. Feedback suggests an indicator, such as a bar, might be more intuitive for signaling when the guardian system takes control.
Air-Guardian heralds a new age of safer skies, offering a reliable safety net for those moments when human attention wavers.
"The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the objective of using machine learning to augment pilots in challenging scenarios and reduce operational errors," says Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author on the paper.
"One of the interesting outcomes of using a visual attention metric in this work is the potential for allowing earlier interventions and greater interpretability by human pilots," says Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work. "This showcases a great example of how AI can be used to work with a human, lowering the barrier for achieving trust by using natural communication mechanisms between the human and the AI system."
This research was partially funded by the U.S. Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, the Boeing Co., and the Office of Naval Research. The findings do not necessarily reflect the views of the U.S. government or the USAF.