Why it’s so hard to use AI to diagnose cancer


In theory, artificial intelligence should be great at helping out. “Our job is pattern recognition,” says Andrew Norgan, a pathologist and medical director of the Mayo Clinic’s digital pathology platform. “We look at the slide and we gather pieces of information that have been proven to be important.”

Visual evaluation is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model is likely to be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. We’re beginning to see many new efforts to build such a model (at least seven attempts in the last year alone), but all of them remain experimental. What will it take to make them good enough to be used in the real world?

Details about the latest effort to build such a model, led by the AI health company Aignostics with the Mayo Clinic, were published on arXiv earlier this month. The paper has not been peer-reviewed, but it reveals much about the challenges of bringing such a tool to real clinical settings.

The model, called Atlas, was trained on 1.2 million tissue samples from 490,000 cases. Its accuracy was tested against six other leading AI pathology models. These models compete on shared tests like classifying breast cancer images or grading tumors, where each model’s predictions are compared with the correct answers given by human pathologists. Atlas beat the rival models on six out of nine tests. It earned its highest score for categorizing cancerous colorectal tissue, reaching the same conclusion as human pathologists 97.1% of the time. On another task, though, classifying tumors from prostate cancer biopsies, Atlas beat the other models’ high scores with a score of just 70.5%. Its average across the nine benchmarks showed that it reached the same answers as human experts 84.6% of the time.
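To make that scoring arithmetic concrete, here is a minimal sketch, not the actual Aignostics evaluation code: the predict function, benchmark structure, and label format are assumptions made purely for illustration. It computes per-benchmark agreement with pathologist labels and a simple cross-benchmark average.

```python
# Hypothetical benchmark scoring: per-task agreement with pathologist
# labels, then a plain average across tasks. All names are placeholders.

def agreement(predictions, labels):
    """Fraction of cases where the model matches the pathologist's label."""
    matches = sum(p == l for p, l in zip(predictions, labels))
    return matches / len(labels)

def score_model(predict, benchmarks):
    """benchmarks: dict mapping task name -> list of (image_path, label) pairs."""
    per_task = {}
    for task, cases in benchmarks.items():
        preds = [predict(image_path) for image_path, _ in cases]
        labels = [label for _, label in cases]
        per_task[task] = agreement(preds, labels)
    average = sum(per_task.values()) / len(per_task)
    return per_task, average
```

Under this kind of tally, a model scoring 0.971 on colorectal tissue and 0.705 on prostate biopsies, with the remaining tasks in between, would land near the 84.6% average reported for Atlas.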

Let’s think about what this means. The best way to know what’s happening to cancerous cells in tissue is to have a sample examined by a pathologist, so that’s the performance AI models are measured against. The best models are approaching humans on specific detection tasks but lagging behind on many others. So how good does a model have to be to be clinically useful?

“Ninety percent is probably not good enough. You need to be even better,” says Carlo Bifulco, chief medical officer at Providence Genomics and co-creator of GigaPath, one of the other AI pathology models examined in the Mayo Clinic study. But, Bifulco says, AI models that don’t score perfectly can still be useful in the short term, and could potentially help pathologists speed up their work and make diagnoses more quickly.

What obstacles are getting in the way of better performance? Problem number one is training data.

“Fewer than 10% of pathology practices in the US are digitized,” Norgan says. That means tissue samples are placed on slides and analyzed under microscopes, and then stored in massive registries without ever being documented digitally. Though European practices tend to be more digitized, and there are efforts underway to create shared data sets of tissue samples for AI models to train on, there’s still not a ton to work with.
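For a sense of what digitization involves on the technical side: a scanned slide is typically stored as a gigapixel whole-slide image, which is cut into small tiles before a model can train on it. The sketch below uses the open-source OpenSlide library; the file name and tile size are arbitrary placeholders, not details from the Mayo Clinic or Aignostics workflow.

```python
# Illustrative only: stream fixed-size tiles from a digitized whole-slide image.
# Requires openslide-python; "example_biopsy.svs" is a placeholder path.
import openslide

def iter_tiles(path, tile_size=256):
    """Yield RGB tiles from the full-resolution level of a whole-slide image."""
    slide = openslide.OpenSlide(path)
    width, height = slide.dimensions  # level-0 (full resolution) size in pixels
    for y in range(0, height - tile_size + 1, tile_size):
        for x in range(0, width - tile_size + 1, tile_size):
            # read_region takes level-0 coordinates and returns an RGBA PIL image
            tile = slide.read_region((x, y), 0, (tile_size, tile_size)).convert("RGB")
            yield tile
    slide.close()

for tile in iter_tiles("example_biopsy.svs"):
    pass  # in practice, tiles would be filtered for tissue and fed to a model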
