
Drones navigate unseen environments with liquid neural networks


Within the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones. But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. The liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts in navigation tasks, could enable potential real-world drone applications like search and rescue, delivery, and wildlife monitoring.

The researchers’ new study, published today, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms, however, captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. These networks can then extract the crucial elements of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new environments.
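To make the idea of a network whose dynamics keep responding to the input stream more concrete, here is a minimal sketch of a liquid time-constant style recurrent cell in PyTorch. It is a hypothetical illustration of the fused update equation described in earlier liquid-network papers, not the authors’ flight-control code; the class, parameter names, and sizes are invented for this example.

```python
import torch
import torch.nn as nn

class LiquidCellSketch(nn.Module):
    """Sketch of a liquid time-constant (LTC) style recurrent cell.

    Each hidden unit has a state x that relaxes toward a learned resting
    level A at an input-dependent rate, so the effective time constant
    changes with the observation stream -- the property loosely described
    as the network continuing to adapt after training.
    """

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.tau = nn.Parameter(torch.ones(hidden_size))   # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_size))    # resting levels
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, inputs: torch.Tensor, state: torch.Tensor, dt: float = 0.1):
        # Input- and state-dependent nonlinearity that modulates the dynamics.
        f = torch.sigmoid(self.gate(torch.cat([inputs, state], dim=-1)))
        # Fused semi-implicit Euler step of dx/dt = -x/tau + f * (A - x).
        return (state + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))


# Toy usage: roll the cell over a short sequence of feature vectors.
cell = LiquidCellSketch(input_size=32, hidden_size=16)
state = torch.zeros(1, 16)
for _ in range(10):
    state = cell(torch.randn(1, 32), state)
print(state.shape)  # torch.Size([1, 16])
```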

Video: Drones navigate unseen environments with liquid neural networks.

“We’re thrilled by the immense potential of our learning-based control approach for robots, because it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely different environment without additional training,” says Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments reveal that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, and even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

A daunting challenge was at the forefront: Do machine-learning systems understand the task they’re given from data when flying drones to an unlabeled object? And would they be able to transfer their learned skill and task to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle with capturing causality, frequently overfitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously.

The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how it transferred learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks, which only learn during the training phase, the liquid neural net’s parameters can change over time, making them not only interpretable, but also more resilient to unexpected or noisy data.
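For readers wondering what training on pilot-collected data looks like in general, the sketch below shows a generic behavior-cloning loop in PyTorch: a small convolutional encoder turns camera frames into features, a recurrent controller (which could be a liquid cell like the one sketched above) integrates them over time, and the loss is the error against the pilot’s recorded control commands. Every name, shape, and tensor here is a placeholder chosen for illustration, not the study’s actual pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical components: a conv encoder over camera frames and a recurrent
# controller head that emits velocity commands. Shapes are illustrative only.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 32) per frame
)
controller = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
head = nn.Linear(64, 4)                         # e.g., vx, vy, vz, yaw rate

params = list(encoder.parameters()) + list(controller.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(frames: torch.Tensor, pilot_cmds: torch.Tensor) -> float:
    """One behavior-cloning step on logged pilot demonstrations.

    frames:     (batch, time, 3, H, W) camera sequence from pilot flights
    pilot_cmds: (batch, time, 4) control commands recorded from the pilot
    """
    b, t = frames.shape[:2]
    feats = encoder(frames.flatten(0, 1)).view(b, t, -1)   # per-frame features
    hidden, _ = controller(feats)                          # temporal integration
    loss = loss_fn(head(hidden), pilot_cmds)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for logged demonstration data.
dummy_frames = torch.randn(2, 8, 3, 64, 64)
dummy_cmds = torch.randn(2, 8, 4)
print(train_step(dummy_frames, dummy_cmds))
```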

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets and executed multi-step loops between objects in never-before-seen environments, surpassing the performance of other cutting-edge counterparts.

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still a lot of room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which need to be tested before we can safely deploy them in our society.”

“Robust learning and performance in out-of-distribution tasks and scenarios are among the key problems that machine learning and autonomous robotic systems have to overcome to make further inroads in society-critical applications,” says Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm developed here will contribute to making AI and robotic systems more reliable, robust, and efficient.”

Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels.

Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and PhD student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Rus.

This research was supported, in part, by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co.
