MIT researchers mix deep learning and physics to repair motion-corrupted MRI scans

Compared with other imaging modalities like X-rays or CT scans, MRI provides high-quality soft-tissue contrast. Unfortunately, MRI is highly sensitive to motion, with even the smallest movements producing image artifacts. These artifacts put patients at risk of misdiagnosis or inappropriate treatment when critical details are obscured from the physician. But researchers at MIT may have developed a deep learning model capable of motion correction in brain MRI.

“Motion is a typical problem in MRI,” explains Nalini Singh, an Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic)-affiliated PhD student in the Harvard-MIT Program in Health Sciences and Technology (HST) and lead author of the paper. “It’s a fairly slow imaging modality.”

MRI sessions can take anywhere from a few minutes to an hour, depending on the type of images required. Even during the shortest scans, small movements can have dramatic effects on the resulting image. Unlike camera imaging, where motion typically manifests as a localized blur, MRI builds each image from frequency-domain (k-space) measurements collected over the course of the scan, so motion at any point often produces artifacts that corrupt the entire image. Patients may be anesthetized or asked to limit deep breathing in order to minimize motion. Even so, these measures often cannot be taken in populations particularly vulnerable to motion, including children and patients with psychiatric disorders.
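Why a tiny movement can ruin the whole image follows from that acquisition process. The snippet below is a minimal sketch using a synthetic phantom (not the authors' code or data): it fills the two halves of k-space from two slightly different head poses and reconstructs the result, and the resulting error is not confined to the region that moved.

```python
# Minimal illustration (synthetic phantom, simplifying assumptions) of why
# mid-scan motion corrupts the whole MRI image: k-space is filled line by
# line, so a rotation partway through mixes measurements from two different
# object poses, and the inverse Fourier transform spreads the error globally.
import numpy as np
from scipy.ndimage import rotate

def kspace(img):
    """2-D Fourier transform with the DC component centered."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

def reconstruct(ksp):
    """Inverse transform from k-space back to image space (magnitude)."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp))))

# Simple synthetic "head": a bright ellipse, which is not rotation-symmetric.
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
phantom = ((x / (n // 3)) ** 2 + (y / (n // 4)) ** 2 < 1.0).astype(float)

# Pose 2: the subject rotates by a few degrees partway through the scan.
moved = rotate(phantom, angle=5.0, reshape=False, order=1)

# Acquire the first half of the k-space rows from the still subject and the
# second half after the movement, mimicking line-by-line acquisition.
ksp_corrupted = kspace(phantom)
ksp_corrupted[n // 2:, :] = kspace(moved)[n // 2:, :]

corrupted = reconstruct(ksp_corrupted)
# The error is not a local blur: artifacts ripple across the entire image.
print("max |corrupted - still| =", np.abs(corrupted - phantom).max())
```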

The paper, titled “Data Consistent Deep Rigid MRI Motion Correction,” was recently awarded best oral presentation at the Medical Imaging with Deep Learning (MIDL) conference in Nashville, Tennessee. The method computationally constructs a motion-free image from motion-corrupted data without changing anything about the scanning procedure. “Our aim was to combine physics-based modeling and deep learning to get the best of both worlds,” Singh says.

The importance of this combined approach lies in ensuring consistency between the output image and the actual measurements of what is being depicted; otherwise the model produces “hallucinations”: images that appear realistic but are physically and spatially inaccurate, potentially worsening diagnostic outcomes.
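The sketch below is a rough numerical illustration of that data-consistency idea (my own simplification, not the paper's implementation, assuming 2-D images and one rigid in-plane rotation per k-space segment): a candidate motion-free image is re-simulated through a forward model in which each block of k-space rows sees the image in the pose the subject occupied while that block was acquired, and the re-simulated data are compared against the measurements that were actually collected. A reconstruction that cannot reproduce the acquired data incurs a penalty, which is what rules out hallucinated detail.

```python
# Rough sketch of a data-consistency penalty (illustrative only; the segment
# structure, rotation-only motion model, and function names are assumptions).
import numpy as np
from scipy.ndimage import rotate

def centered_fft2(img):
    """2-D Fourier transform with the DC component centered."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img)))

def forward_model(image, segment_rows, segment_angles):
    """Each block of k-space rows observes the image in its own estimated pose."""
    ksp = np.zeros(image.shape, dtype=complex)
    for rows, angle in zip(segment_rows, segment_angles):
        posed = rotate(image, angle=angle, reshape=False, order=1)
        ksp[rows, :] = centered_fft2(posed)[rows, :]
    return ksp

def data_consistency_loss(candidate_image, measured_ksp, segment_rows, segment_angles):
    """Squared L2 distance between re-simulated and actually acquired k-space data."""
    predicted = forward_model(candidate_image, segment_rows, segment_angles)
    return float(np.sum(np.abs(predicted - measured_ksp) ** 2))

# Hypothetical usage: two segments (top and bottom halves of k-space) and a
# motion estimate of a 5-degree rotation before the second half was acquired.
n = 256
segments = [slice(0, n // 2), slice(n // 2, n)]
angles = [0.0, 5.0]
# loss = data_consistency_loss(candidate_image, measured_ksp, segments, angles)
```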

Obtaining an MRI free of motion artifacts, particularly from patients with neurological disorders that cause involuntary movement, such as Alzheimer’s or Parkinson’s disease, would benefit more than just patient outcomes. A study from the University of Washington Department of Radiology estimated that motion affects 15 percent of brain MRIs. Motion that forces repeated scans or imaging sessions to obtain images of diagnostic quality results in roughly $115,000 in hospital expenditures per scanner each year.

According to Singh, future work could explore more sophisticated types of head motion as well as motion in other body parts. For example, fetal MRI suffers from rapid, unpredictable motion that cannot be modeled by simple translations and rotations alone.

“This line of work from Singh and company is the next step in MRI motion correction. Not only is it excellent research work, but I believe these methods will be used in all kinds of clinical cases: children and older patients who cannot sit still in the scanner, pathologies which induce motion, studies of moving tissue, even healthy patients will move in the magnet,” says Daniel Moyer, an assistant professor at Vanderbilt University. “In the future, I think it will likely be standard practice to process images with something directly descended from this research.”

Co-authors of this paper include Nalini Singh, Neel Dey, Malte Hoffmann, Bruce Fischl, Elfar Adalsteinsson, Robert Frost, Adrian Dalca and Polina Golland. This research was supported partly by GE Healthcare and by computational hardware provided by the Massachusetts Life Sciences Center. The research team thanks Steve Cauley for helpful discussions. Additional support was provided by NIH NIBIB, NIA, NIMH, NINDS, the Blueprint for Neuroscience Research, a part of the multi-institutional Human Connectome Project, the BRAIN Initiative Cell Census Network, and a Google PhD Fellowship.
