New AI tool generates realistic satellite images of future flooding


Visualizing the potential impacts of a hurricane on people’s homes before it hits may help residents prepare and judge whether to evacuate.

MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird's-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images that did not incorporate a physics-based flood model.

The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding isn’t physically possible.

The team's method is a proof of concept, meant to demonstrate a case in which generative AI models can produce realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in those regions.

"The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public," says Björn Lütjens, a postdoc in MIT's Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT's Department of Aeronautics and Astronautics (AeroAstro). "One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness."

To illustrate the potential of the new method, which they have dubbed the "Earth Intelligence Engine," the team has made it available as an online resource for others to try.

The researchers report their results today in the journal . The study's MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

Generative adversarial images

The new study is an extension of the team's efforts to apply generative AI tools to visualize future climate scenarios.

"Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results," says Newman, the study's senior author. "People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable."

For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or "adversarial," neural networks. The first, "generator" network is trained on pairs of real data, such as satellite images before and after a hurricane. The second, "discriminator" network is then trained to distinguish between the real satellite imagery and imagery synthesized by the first network.

Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce "hallucinations," or factually incorrect features in an otherwise realistic image that shouldn't be there.
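As a rough sketch of this adversarial push and pull, the two networks' binary cross-entropy losses can be written out directly. Everything below (the tiny linear "networks," the shapes, the variable names) is an illustrative stand-in, not the architecture or data used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Condition: a "pre-storm" image (flattened); target: the paired "post-storm" image.
pre_storm = rng.normal(size=(16,))    # conditioning input
post_storm = rng.normal(size=(16,))   # real paired output

# Generator: maps the condition to a synthetic post-storm image (linear stand-in).
W_g = rng.normal(scale=0.1, size=(16, 16))
fake = pre_storm @ W_g

# Discriminator: scores (condition, image) pairs as real (near 1) or fake (near 0).
W_d = rng.normal(scale=0.1, size=(32,))
def discriminate(cond, img):
    return sigmoid(np.concatenate([cond, img]) @ W_d)

d_real = discriminate(pre_storm, post_storm)
d_fake = discriminate(pre_storm, fake)

# The adversarial objectives: the discriminator is rewarded for pushing
# d_real toward 1 and d_fake toward 0; the generator is rewarded for
# fooling it, i.e. pushing d_fake toward 1.
disc_loss = -(np.log(d_real) + np.log(1.0 - d_fake))
gen_loss = -np.log(d_fake)

print(f"disc_loss={disc_loss:.3f}, gen_loss={gen_loss:.3f}")
```

Training alternates gradient steps on these two losses; the sketch stops at computing them, which is where the "competing networks" framing lives.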

"Hallucinations can mislead viewers," says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools could be trusted to help inform people, particularly in risk-sensitive scenarios. "We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so crucial?"

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm's way.

Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.
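The staged pipeline described above can be pictured as a chain of functions, each physical model reduced to a toy stand-in. All the function names, formulas, and numbers here are illustrative assumptions, not components of any real forecasting system:

```python
import numpy as np

def hurricane_track(hours):
    """Toy track model: storm position over time (km along the coast)."""
    return 10.0 * np.arange(hours)

def wind_field(track, grid_size=8):
    """Toy wind model: wind speed decays with distance from the landfall column."""
    xs = np.arange(grid_size)
    dist = np.abs(xs - (track[-1] % grid_size))
    return np.maximum(0.0, 50.0 - 5.0 * dist)      # m/s per grid column

def storm_surge(wind):
    """Toy surge model: surge height grows with wind speed."""
    return 0.05 * wind                              # meters of surge

def flood_map(surge, elevation):
    """Toy hydraulic model: a cell floods where surge exceeds its elevation."""
    depth = surge[None, :] - elevation              # broadcast surge across rows
    return np.maximum(depth, 0.0)                   # flood depth in meters

# Each stage's output feeds the next, mirroring the pipeline in the text.
elevation = np.linspace(0.0, 3.0, 8)[:, None] * np.ones((1, 8))  # toy terrain
depths = flood_map(storm_surge(wind_field(hurricane_track(hours=6))), elevation)
print("max flood depth (m):", depths.max())
```

The final `depths` grid plays the role of the color-coded flood-elevation map that the real pipeline hands to policymakers.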

"The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?" Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken as satellites passed over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding is not possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane's trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.
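One simple way to picture this pairing is as a hard constraint: flood pixels from the generative model survive only where the physics-based flood extent allows them, which is what makes hallucinated flooding at high elevations impossible by construction. The maps, threshold, and rule below are toy assumptions for illustration, not the study's actual conditioning scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Generated" flood probability per pixel, as an unconstrained GAN might output.
gan_flood_prob = rng.uniform(size=(8, 8))

# Physics-based flood extent: True where a flood model forecasts water.
elevation = np.linspace(0.0, 3.0, 8)[:, None] * np.ones((1, 8))
physics_extent = elevation < 1.0          # toy rule: only low-lying cells can flood

# Hard constraint: drop hallucinated flood outside the physical extent, so the
# final flood mask agrees with the flood model pixel by pixel.
constrained = np.where(physics_extent, gan_flood_prob > 0.5, False)

hallucinated = (gan_flood_prob > 0.5) & ~physics_extent
print("hallucinated flood pixels removed:", int(hallucinated.sum()))
```

Masking after generation is the bluntest version of the idea; conditioning the generator on the flood-model output during training achieves the same agreement while letting the network render realistic imagery inside the permitted extent.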

"We show a tangible way to combine machine learning with physics for a use case that's risk-sensitive, which requires us to analyze the complexity of Earth's systems and project future actions and possible scenarios to keep people out of harm's way," Newman says. "We can't wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives."

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
