Advancing urban tree monitoring with AI-powered digital twins

The Irish philosopher George Berkeley, best known for his theory of immaterialism, once famously mused, “If a tree falls in a forest and no one is around to hear it, does it make a sound?”

What about AI-generated trees? They probably wouldn’t make a sound, but they will be critical nonetheless for applications such as the adaptation of urban flora to climate change. To that end, the novel “Tree-D Fusion” system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.

“We’re bridging decades of forestry science with modern AI capabilities,” says Sara Beery, MIT electrical engineering and computer science (EECS) assistant professor, MIT CSAIL principal investigator, and a co-author on a new paper about Tree-D Fusion. “This allows us to not only identify trees in cities, but to predict how they’ll grow and impact their surroundings over time. We’re not ignoring the past 30 years of work in understanding how to build these 3D synthetic models; instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities around North America, and eventually the globe.”

Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but branches it forward by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods, or struggled with accuracy at scale, Tree-D Fusion can create detailed models that include typically hidden features, such as the back sides of trees that aren’t visible in street-view photos.

The technology’s practical applications extend far beyond mere observation. City planners could use Tree-D Fusion to someday peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could shift urban forest management from reactive maintenance to proactive planning.

A tree grows in Brooklyn (and plenty of other places)

The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
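
To make the two-stage idea concrete, here is a minimal sketch of that division of labor in Python. Everything in it (the function names, the envelope parameters, the per-genus branching numbers) is an illustrative assumption, not the actual Tree-D Fusion code: a learned model regresses a coarse envelope from a street-view photo, and a genus-conditioned procedural rule then grows branch structure inside it.

```python
# Minimal sketch of the hybrid pipeline described above. All names and
# values are illustrative assumptions, not the real Tree-D Fusion API.
from dataclasses import dataclass

@dataclass
class Envelope:
    height_m: float        # overall tree height
    crown_radius_m: float  # radius of the crown's bounding volume

def predict_envelope(image_path: str) -> Envelope:
    """Stand-in for the deep-learning stage: the real system would
    regress the tree's coarse 3D shape from a street-view image."""
    return Envelope(height_m=9.0, crown_radius_m=3.0)  # placeholder output

# Per-genus branching parameters (invented numbers for illustration).
GENUS_PARAMS = {
    "Quercus": {"angle_deg": 40.0, "children": 3},
    "Acer":    {"angle_deg": 30.0, "children": 2},
}

def grow_procedural(env: Envelope, genus: str, max_depth: int = 4) -> list[str]:
    """Stand-in for the procedural stage: recursively emit branch
    segments, shrinking each generation so the crown stays bounded."""
    params = GENUS_PARAMS.get(genus, {"angle_deg": 35.0, "children": 2})
    segments: list[str] = []

    def grow(level: int, length_m: float) -> None:
        if level == max_depth or length_m < 0.2:
            return
        for _ in range(params["children"]):
            segments.append(
                f"level={level} len={length_m:.2f}m angle={params['angle_deg']}"
            )
            grow(level + 1, length_m * 0.6)

    grow(0, env.height_m / 3)
    return segments

branches = grow_procedural(predict_envelope("street_view.jpg"), "Quercus")
print(f"{len(branches)} branch segments generated")
```

The point of the split is that perception only has to get the coarse shape right; genus-appropriate fine structure comes from procedural growth rules that forestry science has refined over decades.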

Now, as cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University and Google team is embarking on a global study that re-imagines trees as living climate shields. Their digital modeling system captures the intricate dance of shade patterns throughout the seasons, revealing how strategic urban forestry could transform sweltering city blocks into more naturally cooled neighborhoods.
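
As a back-of-the-envelope illustration of why seasonal shade modeling matters (and emphatically not the team’s method), consider that the shadow cast by a roughly spherical crown stretches as the sun sits lower in the sky, so the same tree shades very different amounts of ground across the year. The crown radius and solar elevations below are invented example values.

```python
# Illustrative shade estimate: a spherical crown of radius r casts an
# elliptical shadow whose along-sun axis stretches by 1/sin(elevation).
# This ignores trunk shadow, leaf density, and terrain.
import math

def crown_shadow_area_m2(crown_radius_m: float, solar_elevation_deg: float) -> float:
    """Area of the ellipse cast on flat ground by a spherical crown."""
    r = crown_radius_m
    return math.pi * r * (r / math.sin(math.radians(solar_elevation_deg)))

for season, elevation_deg in [("summer noon", 70.0),
                              ("spring noon", 45.0),
                              ("winter noon", 25.0)]:
    area = crown_shadow_area_m2(crown_radius_m=3.0, solar_elevation_deg=elevation_deg)
    print(f"{season}: about {area:.0f} m^2 of shade")
```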

“Every time a street mapping vehicle passes through a city now, we’re not just taking snapshots; we’re watching these urban forests evolve in real-time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, offering cities a powerful lens to observe how environmental stresses shape tree health and growth patterns across their urban landscape.”

AI-based tree modeling has emerged as an ally in the quest for environmental justice: by mapping urban tree cover in unprecedented detail, a sister project from the Google AI for Nature team has helped uncover disparities in green-space access across different socioeconomic areas. “We’re not just studying urban forests; we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree-health experts to refine these models, ensuring that as cities expand their green canopies, the benefits branch out to all residents equally.

It’s a breeze

While Tree-D Fusion marks some major “growth” in the field, trees can be uniquely challenging for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are nature’s shape-shifters: swaying in the wind, interweaving branches with neighbors, and constantly changing their form as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate the future shape of a tree, depending on the environmental conditions.
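
A minimal sketch of what “simulation-ready” can mean in practice: the digital twin carries a growth rule that can be stepped forward in time under assumed local conditions. The response curves below are invented for illustration and are not the calibrated tree-growth models Tree-D Fusion builds on.

```python
# Hedged sketch of stepping a tree model forward under two climate
# scenarios. The growth rule and constants are illustrative only.
def project_height(height_m: float, years: int,
                   mean_temp_c: float, water_access: float) -> float:
    """Apply a simple annual growth rule `years` times.

    water_access is a 0-1 proxy for groundwater availability; growth
    slows as the tree approaches an asymptotic maximum height.
    """
    MAX_HEIGHT_M = 30.0
    for _ in range(years):
        # Invented response: growth peaks near 15 C, scales with water.
        temp_factor = max(0.0, 1.0 - abs(mean_temp_c - 15.0) / 20.0)
        annual_growth_m = 0.5 * temp_factor * water_access
        height_m += annual_growth_m * (1.0 - height_m / MAX_HEIGHT_M)
    return min(height_m, MAX_HEIGHT_M)

# The same 9 m tree, 20 years out, in a mild-wet vs. a hot-dry scenario:
print(project_height(9.0, years=20, mean_temp_c=15.0, water_access=0.9))
print(project_height(9.0, years=20, mean_temp_c=22.0, water_access=0.5))
```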

“What makes this work exciting is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF [neural radiance fields] excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically alter their structure from moment to moment.”

The team’s approach of creating rough structural envelopes that approximate each tree’s form has proven remarkably effective, but certain issues remain unsolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees grow into one another, their intertwined branches create a puzzle that no current AI system can fully unravel.

The scientists see their dataset as a springboard for future innovations in computer vision, and they’re already exploring applications beyond street-view imagery, looking to extend their approach to platforms like iNaturalist and wildlife camera traps.

“This marks just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University PhD student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my collaborators, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems: supporting biodiversity, promoting global sustainability, and ultimately, benefiting the health of our entire planet.”

Beery and Lee’s co-authors are Jonathan Huang, Scaled Foundations head of AI (formerly of Google); and four others from Purdue University: PhD student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work builds on efforts supported by the US Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.
