New AI system could speed up clinical research

Annotating regions of interest in medical images, a process known as segmentation, is often one of the first steps clinical researchers take when conducting a new study involving biomedical images.

For instance, to determine how the size of the brain’s hippocampus changes as patients age, a scientist first outlines each hippocampus in a series of brain scans. For many structures and image types, this is often a manual process that can be extremely time-consuming, especially if the regions being studied are difficult to delineate.

To streamline the process, MIT researchers developed an artificial intelligence-based system that enables a researcher to rapidly segment new biomedical imaging datasets by clicking, scribbling, and drawing boxes on the images. The AI model uses these interactions to predict the segmentation.

As the user marks additional images, the number of interactions they need to perform decreases, eventually dropping to zero. The model can then segment each new image accurately without user input.

It can do this because the model’s architecture has been specially designed to use information from images it has already segmented when making new predictions.
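The released MultiverSeg code and interface aren’t shown here, but a minimal Python sketch of that workflow might look like the following, with every function name hypothetical: each finished segmentation joins a growing context set, and the correction loop shortens as the context grows.

```python
import numpy as np

def predict_mask(image, interactions, context):
    """Stand-in for the model. The real network conditions on the user's
    clicks/scribbles and on the (image, mask) pairs in the context set;
    this toy version just thresholds the image so the sketch runs."""
    return (image > image.mean()).astype(np.uint8)

def segment_dataset(images, get_interaction, user_accepts):
    context = []   # grows with every finished (image, mask) pair
    results = []
    for image in images:
        interactions = []
        mask = predict_mask(image, interactions, context)
        # The user refines the prediction until satisfied; with a richer
        # context set, fewer corrections are needed, eventually zero.
        while not user_accepts(image, mask):
            interactions.append(get_interaction(image, mask))
            mask = predict_mask(image, interactions, context)
        context.append((image, mask))
        results.append(mask)
    return results
```

Here `get_interaction` and `user_accepts` are placeholders for the interactive interface, which collects scribbles, clicks, or boxes and asks whether the prediction is good enough.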

Unlike other medical image segmentation models, this system allows the user to segment an entire dataset without repeating their work for each image.

In addition, the interactive tool doesn’t require a presegmented image dataset for training, so users don’t need machine-learning expertise or extensive computational resources. They can use the system for a new segmentation task without retraining the model.

In the long run, this tool could accelerate studies of new treatment methods and reduce the cost of clinical trials and medical research. It could also be used by physicians to improve the efficiency of clinical applications, such as radiation treatment planning.

“Many scientists might only have time to segment a few images per day for their research because manual image segmentation is so time-consuming. Our hope is that this system will enable new science by allowing clinical researchers to conduct studies they were prohibited from doing before because of the lack of an efficient tool,” says Hallee Wong, an electrical engineering and computer science graduate student and lead author of a paper on this new tool.

She is joined on the paper by Jose Javier Gonzalez Ortiz PhD ’24; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering; and senior author Adrian Dalca, an assistant professor at Harvard Medical School and MGH, and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Computer Vision.

Streamlining segmentation

There are primarily two methods researchers use to segment new sets of medical images. With interactive segmentation, they input an image into an AI system and use an interface to mark areas of interest. The model predicts the segmentation based on those interactions.

A tool previously developed by the MIT researchers, ScribblePrompt, allows users to do this, but they must repeat the process for each new image.

Another approach is to develop a task-specific AI model that automatically segments the images. This approach requires the user to manually segment hundreds of images to create a dataset, and then train a machine-learning model. That model predicts the segmentation for a new image. But the user must start the complex, machine-learning-based process from scratch for each new task, and there is no way to correct the model if it makes a mistake.
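For contrast, that conventional task-specific route looks roughly like the generic sketch below (purely illustrative, not any particular tool): label a large dataset up front, train once, then predict with no way to correct errors at inference time.

```python
import numpy as np

# Conventional task-specific pipeline (generic sketch): every new
# task repeats the label-then-train steps from scratch.

def train_segmenter(images, manual_masks):
    """Fit a simple intensity threshold from many hand-drawn masks.
    A real pipeline would train a neural network here."""
    foreground = np.concatenate(
        [img[m.astype(bool)] for img, m in zip(images, manual_masks)]
    )
    return foreground.min()  # learned threshold

def predict(threshold, image):
    # No interaction at inference time: if this prediction is wrong,
    # the user cannot fix it without relabeling and retraining.
    return (image >= threshold).astype(np.uint8)
```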

The new system, MultiverSeg, combines the best of each approach. It predicts a segmentation for a new image based on user interactions, like scribbles, but also keeps each segmented image in a context set that it refers to later.

When the user uploads a new image and marks areas of interest, the model draws on the examples in its context set to make a more accurate prediction with less user input.

The researchers designed the model’s architecture to work with a context set of any size, so the user doesn’t need a certain number of images. This gives MultiverSeg the flexibility to be used in a range of applications.
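The paper’s exact architecture isn’t reproduced here, but one common way to make a network indifferent to context-set size is to encode each example and pool the encodings with a permutation-invariant operation such as the mean. A toy sketch of that general idea, with every name hypothetical:

```python
import numpy as np

def encode_pair(image, mask):
    """Hypothetical encoder mapping an (image, mask) pair to a
    fixed-length feature vector; a real model would learn this."""
    return np.array([image.mean(), image.std(), mask.mean()])

def summarize_context(context):
    """Mean-pooling over per-example features yields a summary of the
    same size whether the context holds one example or one hundred."""
    if not context:
        return None  # no examples yet; rely on user interactions alone
    features = np.stack([encode_pair(img, m) for img, m in context])
    return features.mean(axis=0)
```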

“At some point, for many tasks, you shouldn’t need to provide any interactions. If you have enough examples in the context set, the model can accurately predict the segmentation on its own,” Wong says.

The researchers carefully engineered and trained the model on a diverse collection of biomedical imaging data to ensure it could incrementally improve its predictions based on user input.

The user doesn’t have to retrain or customize the model for their data. To use MultiverSeg for a new task, one can upload a new medical image and start marking it.

When the researchers compared MultiverSeg to state-of-the-art tools for in-context and interactive image segmentation, it outperformed each baseline.

Fewer clicks, better results

Unlike these other tools, MultiverSeg requires less user input with each image. By the ninth new image, it needed only two clicks from the user to generate a segmentation more accurate than a model designed specifically for the task.

For some image types, like X-rays, the user might only need to segment one or two images manually before the model becomes accurate enough to make predictions on its own.

The tool’s interactivity also enables the user to correct the model’s prediction, iterating until it reaches the desired level of accuracy. Compared to the researchers’ previous system, MultiverSeg reached 90 percent accuracy with roughly two-thirds the number of scribbles and three-quarters the number of clicks.

“With MultiverSeg, users can always provide more interactions to refine the AI predictions. This still dramatically accelerates the process because it will be faster to correct something that exists than to start from scratch,” Wong says.

Moving forward, the researchers want to test this tool in real-world situations with clinical collaborators and improve it based on user feedback. They also want to enable MultiverSeg to segment 3D biomedical images.

This work is supported, in part, by Quanta Computer, Inc., and the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.
