
Machine learning facilitates “turbulence tracking” in fusion reactors


Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.

A multidisciplinary team of researchers is now bringing tools and insights from machine learning to help this effort. Scientists from MIT and elsewhere have used computer-vision models to discover and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.

Monitoring the formation and movement of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting from the reacting fuel, which ultimately determines the engineering requirements for the reactor walls to withstand those flows. However, scientists typically study blobs using averaging techniques, which trade details of individual structures in favor of aggregate statistics. Individual blobs must be tracked by marking them manually in video data.

To make this process more effective and efficient, the researchers built a synthetic video dataset of plasma turbulence. They used it to train four computer-vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.

When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy, more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.

Because millions of video frames are captured during a single fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

“Before, we could get a macroscopic picture of what these structures are doing on average. Now, we have a microscope and the computational power to analyze one event at a time. If we take a step back, what this reveals is the power available from these machine-learning techniques, and ways to use these computational resources to make progress,” says Theodore Golfinopoulos, a research scientist at the MIT Plasma Science and Fusion Center and co-author of a paper detailing these approaches.

His fellow co-authors include lead author Woonghee “Harry” Han, a physics PhD candidate; senior author Iddo Drori, a visiting professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL), faculty associate professor at Boston University, and adjunct at Columbia University; as well as others from the MIT Plasma Science and Fusion Center, the MIT Department of Civil and Environmental Engineering, and the Swiss Federal Institute of Technology in Lausanne. The research appears today.

Heating things up

For more than 70 years, scientists have sought to use controlled thermonuclear fusion reactions to develop an energy source. To reach the conditions necessary for a fusion reaction, fuel must be heated to temperatures above 100 million degrees Celsius. (The core of the sun is about 15 million degrees Celsius.)

A common method for containing this super-hot fuel, called plasma, is to use a tokamak. These devices utilize extremely powerful magnetic fields to hold the plasma in place and control the interaction between the exhaust heat from the plasma and the reactor walls.

However, blobs appear like filaments falling out of the plasma at the very edge, between the plasma and the reactor walls. These random, turbulent structures affect how energy flows between the plasma and the reactor.

“Knowing what the blobs are doing strongly constrains the engineering performance that your tokamak power plant needs at the edge,” adds Golfinopoulos.

Researchers use a unique imaging technique to capture video of the plasma’s turbulent edge during experiments. An experimental campaign may last months; a typical day will produce about 30 seconds of data, corresponding to roughly 60 million video frames, with hundreds of blobs appearing each second. This makes it impossible to track all blobs manually, so researchers rely on average sampling techniques that only provide broad characteristics of blob size, speed, and frequency.
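The scale of that data-handling problem follows directly from the figures quoted above. A rough sketch (the camera frame rate is implied by the article's numbers rather than stated):

```python
# Back-of-the-envelope arithmetic from the figures in the article:
# a typical day yields ~30 seconds of footage and ~60 million frames.
seconds_per_day = 30
frames_per_day = 60_000_000

# Implied camera frame rate: 2 million frames per second.
frames_per_second = frames_per_day / seconds_per_day

# Even an annotator marking one frame per second, 8 hours a day,
# would need years to label a single day's footage.
annotator_years = frames_per_day / (3600 * 8 * 365)
```

This is why the article describes manual blob-by-blob tracking as impossible in practice.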

“However, machine learning provides a solution to this by blob-by-blob tracking for every frame, not just average quantities. This gives us much more knowledge about what is happening at the boundary of the plasma,” Han says.

He and his co-authors took four well-established computer vision models, which are commonly used for applications like autonomous driving, and trained them to tackle this problem.

Simulating blobs

To train these models, they created a vast dataset of synthetic video clips that captured the blobs’ random and unpredictable nature.

“Sometimes they change direction or speed, sometimes multiple blobs merge, or they split apart. These kinds of events were not considered before with traditional approaches, but we could freely simulate those behaviors in the synthetic data,” Han says.
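The paper's actual generator is not reproduced here, but the idea can be illustrated with a minimal sketch: render drifting 2D Gaussian "blobs" whose velocities change at random between frames, so the ground-truth positions are known by construction. All function names and parameters below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gaussian_blob(shape, center, sigma, amplitude=1.0):
    """Render one 2D Gaussian 'blob' on a frame of the given shape."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = center
    return amplitude * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def synthetic_clip(n_frames=16, shape=(64, 64), n_blobs=3, seed=0):
    """Generate a clip in which blobs drift with randomly changing velocity.

    Returns the frames and, for each frame, the ground-truth blob centers;
    the labels come free because we placed the blobs ourselves.
    """
    rng = np.random.default_rng(seed)
    centers = rng.uniform(10, 54, size=(n_blobs, 2))
    velocities = rng.normal(0, 1.5, size=(n_blobs, 2))
    frames, labels = [], []
    for _ in range(n_frames):
        frame = np.zeros(shape)
        for c in centers:
            frame += gaussian_blob(shape, c, sigma=3.0)
        frames.append(frame + rng.normal(0, 0.05, shape))  # add camera noise
        labels.append(centers.copy())
        # Blobs sometimes change direction or speed, as in the real data.
        velocities += rng.normal(0, 0.5, size=velocities.shape)
        centers += velocities
    return np.stack(frames), labels
```

Because every frame carries exact labels, a dataset like this can supervise a detector without any manual annotation; merging and splitting events could be added by spawning or combining centers between frames.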

Creating synthetic data also allowed them to label each blob, which made the training process more effective, Drori adds.

Using these synthetic data, they trained the models to draw boundaries around blobs, teaching them to closely mimic what a human scientist would draw.

Then they tested the models using real video data from experiments. First, they measured how closely the boundaries the models drew matched up with actual blob contours.
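The article does not name the overlap measure; a standard choice for scoring how well a predicted blob mask matches a ground-truth brightness contour is intersection-over-union, sketched here as an assumption:

```python
import numpy as np

def boundary_iou(pred_mask, true_mask):
    """Intersection-over-union between a predicted blob mask and a
    ground-truth mask (e.g., a thresholded brightness contour)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, true).sum() / union

# Toy check: two 4x4 squares offset by one pixel share a 3x3 patch,
# so IoU = 9 / (16 + 16 - 9) = 9/23.
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
true = np.zeros((10, 10)); true[3:7, 3:7] = 1
iou = boundary_iou(pred, true)
```

An IoU threshold (often 0.5) then converts each predicted boundary into a hit or a miss when tallying accuracy across frames.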

But they also wanted to see whether the models predicted objects that humans would identify. They asked three human experts to pinpoint the centers of blobs in video frames and checked to see whether the models predicted blobs in those same locations.
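One simple way to score that comparison (hypothetical; the paper may use a different matching rule) is to count predicted centers that land within a pixel tolerance of an expert-marked center, using each expert mark at most once:

```python
import numpy as np

def match_centers(pred_centers, expert_centers, tol=5.0):
    """Fraction of expert-marked blob centers matched by a prediction
    within `tol` pixels; each expert mark is consumed at most once."""
    unmatched = [np.asarray(e, dtype=float) for e in expert_centers]
    hits = 0
    for p in pred_centers:
        p = np.asarray(p, dtype=float)
        dists = [np.linalg.norm(p - e) for e in unmatched]
        if dists and min(dists) <= tol:
            hits += 1
            unmatched.pop(int(np.argmin(dists)))
    return hits / max(len(expert_centers), 1)

# One of two expert marks is within tolerance of a prediction -> 0.5.
rate = match_centers([(10, 10), (30, 30)], [(11, 10), (50, 50)])
```

Averaging this rate over frames and over the three experts gives a detection score that is directly comparable to human agreement.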

The models were able to draw accurate blob boundaries, overlapping with brightness contours that are considered ground truth, about 80 percent of the time. Their evaluations were similar to those of human experts, and they successfully predicted the theory-defined regime of the blobs, which agrees with the results from a traditional method.

Now that they have shown the success of using synthetic data and computer vision models for tracking blobs, the researchers plan to apply these techniques to other problems in fusion research, such as estimating particle transport at the boundary of a plasma, Han says.

They also made the dataset and models publicly available, and look forward to seeing how other research groups apply these tools to study the dynamics of blobs, says Drori.

“Prior to this, there was a barrier to entry in that mostly the only people working on this problem were plasma physicists, who had the datasets and were using their methods. There is a huge machine-learning and computer-vision community. One goal of this work is to encourage participation in fusion research from the broader machine-learning community toward the broader goal of helping solve the critical problem of climate change,” he adds.

This research is supported, in part, by the U.S. Department of Energy and the Swiss National Science Foundation.
