
A pose-mapping technique could remotely evaluate patients with cerebral palsy


It can be a hassle to get to the doctor’s office. And the task may be especially difficult for parents of children with motor disorders such as cerebral palsy, as a clinician must evaluate the child in person regularly, often for an hour at a time. Making it to these frequent evaluations can be expensive, time-consuming, and emotionally taxing.

MIT engineers hope to alleviate some of that stress with a new method that remotely evaluates patients’ motor function. By combining computer vision and machine-learning techniques, the method analyzes videos of patients in real time and computes a clinical score of motor function based on certain patterns of poses that it detects in the video frames.

The researchers tested the method on videos of more than 1,000 children with cerebral palsy. They found that the method could process each video and assign a clinical score that matched, with more than 70 percent accuracy, what a clinician had previously determined during an in-person visit.

The video evaluation can be run on a range of mobile devices. The team envisions that patients could be evaluated on their progress simply by setting up their phone or tablet to take a video as they move about their own home. They could then load the video into a program that would quickly analyze the frames and assign a clinical score, or level of progress. The video and the score could then be sent to a doctor for review.

The team is now tailoring the approach to evaluate children with metachromatic leukodystrophy, a rare genetic disorder that affects the central and peripheral nervous systems. They also hope to adapt the method to evaluate patients who have experienced a stroke.

“We want to reduce some of the stress on patients by not having to go to the hospital for each evaluation,” says Hermano Krebs, principal research scientist at MIT’s Department of Mechanical Engineering. “We think this technology could potentially be used to remotely evaluate any condition that affects motor behavior.”

Krebs and his colleagues will present their new approach at the IEEE Conference on Body Sensor Networks in October. The study’s MIT authors are first author Peijun Zhao, co-principal investigator Moises Alencastre-Miranda, Zhan Shen, and Ciaran O’Neill, along with David Whiteman and Javier Gervas-Arruga of Takeda Development Center Americas, Inc.

Network training

At MIT, Krebs develops robotic systems that physically work with patients to help them regain or strengthen motor function. He has also adapted the systems to gauge patients’ progress and predict which therapies could work best for them. While these technologies have worked well, they are significantly limited in their accessibility: Patients must travel to a hospital or facility where the robots are in place.

“We asked ourselves, how could we expand the good results we got with rehab robots to a ubiquitous device?” Krebs recalls. “As smartphones are everywhere, our goal was to take advantage of their capabilities to remotely assess individuals with motor disabilities, so that they could be evaluated anywhere.”

A new MIT method incorporates real-time skeleton pose data, such as the example pictured, to remotely analyze videos of children with cerebral palsy and automatically assign a clinical level of motor function.

Image: Dataset created by Stanford Neuromuscular Biomechanics Laboratory in collaboration with Gillette Children’s Specialty Healthcare

The researchers looked first to computer vision and algorithms that estimate human movements. In recent years, scientists have developed pose estimation algorithms that are designed to take a video (for instance, of a woman kicking a soccer ball) and translate her movements into a corresponding series of skeleton poses, in real time. The resulting sequence of lines and dots can be mapped to coordinates that scientists can further analyze.
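To make the data flow concrete, the following is a minimal sketch (not the study’s actual pipeline) of what pose-estimation output typically looks like downstream: per-frame (x, y) coordinates for a fixed set of skeleton joints, stacked into an array that later models can analyze. The joint names and the hip-centering step are illustrative assumptions.

```python
import numpy as np

# Hypothetical joint layout; real pose estimators define their own sets.
JOINTS = ["nose", "l_shoulder", "r_shoulder", "l_hip", "r_hip",
          "l_knee", "r_knee", "l_ankle", "r_ankle"]

def normalize_sequence(keypoints):
    """Center each frame on the mid-hip point so the sequence encodes
    posture rather than the person's location in the image.

    keypoints: array of shape (T, J, 2) in pixel coordinates.
    returns:   array of the same shape, hip-centered per frame.
    """
    kp = np.asarray(keypoints, dtype=float)
    l_hip, r_hip = JOINTS.index("l_hip"), JOINTS.index("r_hip")
    mid_hip = (kp[:, l_hip] + kp[:, r_hip]) / 2.0   # (T, 2) per-frame center
    return kp - mid_hip[:, None, :]                 # broadcast over joints

# Example: 4 frames of a 9-joint skeleton at arbitrary pixel positions.
seq = np.random.default_rng(0).uniform(0, 640, size=(4, len(JOINTS), 2))
centered = normalize_sequence(seq)
```

After centering, the mid-hip of every frame sits at the origin, so two children filmed at different spots in a room produce comparable coordinate sequences.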

Krebs and his colleagues aimed to develop a method to analyze skeleton pose data of patients with cerebral palsy, a disorder that has traditionally been evaluated along the Gross Motor Function Classification System (GMFCS), a five-level scale that represents a child’s general motor function. (The lower the number, the higher the child’s mobility.)

The team worked with a publicly available set of skeleton pose data produced by Stanford University’s Neuromuscular Biomechanics Laboratory. This dataset comprised videos of more than 1,000 children with cerebral palsy. Each video showed a child performing a series of exercises in a clinical setting, and each video was tagged with the GMFCS score that a clinician assigned the child after the in-person assessment. The Stanford group ran the videos through a pose estimation algorithm to generate skeleton pose data, which the MIT group then used as a starting point for their study.

The researchers then looked for ways to automatically decipher patterns in the cerebral palsy data that are characteristic of each clinical motor function level. They began with a Spatial-Temporal Graph Convolutional Neural Network, a machine-learning approach that trains a computer to process spatial data that changes over time, such as a sequence of skeleton poses, and assign a classification.
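The core idea of such a network can be illustrated with a toy layer: a spatial step in which each joint aggregates features from its skeleton neighbors (via an adjacency matrix), followed by a temporal step that smooths features across consecutive frames. This is a simplified numpy sketch of the mechanism, not the architecture used in the study.

```python
import numpy as np

def st_gcn_layer(X, A, W, t_kernel=3):
    """One toy spatial-temporal graph convolution step.

    X: (T, J, C) features for T frames, J joints, C channels.
    A: (J, J) row-normalized skeleton adjacency (with self-loops).
    W: (C, D) channel-mixing weights.
    """
    T = X.shape[0]
    # Spatial graph convolution: mix each joint with its neighbors, then channels.
    S = np.einsum("jk,tkc->tjc", A, X)
    S = np.maximum(np.einsum("tjc,cd->tjd", S, W), 0.0)   # ReLU
    # Temporal convolution: moving average over t_kernel frames ('same' padding).
    pad = t_kernel // 2
    P = np.pad(S, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    return np.stack([P[t:t + t_kernel].mean(axis=0) for t in range(T)])

# Example: a 3-joint chain (hip - knee - ankle) with self-loops, row-normalized.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
A /= A.sum(axis=1, keepdims=True)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3, 2))    # 5 frames, 3 joints, 2 channels (x, y)
W = rng.normal(size=(2, 4))       # learned channel-mixing weights
out = st_gcn_layer(X, A, W)       # shape (5, 3, 4)
```

Stacking many such layers, and learning A’s edge weights and W by backpropagation, is what lets the real network pick up gait patterns that span both the body and time.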

Before the team applied the neural network to cerebral palsy, they utilized a model that had been pretrained on a more general dataset, which contained videos of healthy adults performing various daily activities like walking, running, sitting, and shaking hands. They took the backbone of this pretrained model, added a new classification layer specific to the clinical scores associated with cerebral palsy, and fine-tuned the network to recognize distinctive patterns within the movements of children with cerebral palsy and accurately classify them within the main clinical assessment levels.
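The transfer-learning recipe described above, keeping a pretrained feature backbone and training only a fresh head for the five GMFCS levels, can be sketched as follows. Here the backbone is stood in for by a fixed random projection and the data are synthetic, purely so the sketch is runnable; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_FEAT, N_CLASSES = 30, 16, 5          # N_CLASSES = GMFCS levels I-V

# "Pretrained" backbone weights: frozen during fine-tuning.
W_backbone = rng.normal(size=(D_IN, D_FEAT))
def backbone(x):
    return np.tanh(x @ W_backbone)           # frozen feature extractor

# New classification head, trained from scratch on the CP data.
W_head = np.zeros((D_FEAT, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-in for labeled pose-sequence features + GMFCS labels:
# each class gets a distinct mean so the labels are learnable.
X = rng.normal(size=(200, D_IN))
y = rng.integers(0, N_CLASSES, size=200)
X += 3.0 * np.eye(N_CLASSES)[y] @ rng.normal(size=(N_CLASSES, D_IN))

F = backbone(X)                              # features from the frozen backbone
Y = np.eye(N_CLASSES)[y]
for _ in range(300):                         # train only the head (softmax regression)
    P = softmax(F @ W_head)
    W_head -= 0.5 * (F.T @ (P - Y)) / len(y)

acc = (np.argmax(F @ W_head, axis=1) == y).mean()
```

Because only `W_head` is updated, the general movement knowledge encoded in the backbone is preserved while the head specializes to the clinical labels.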

They found that the pretrained network learned to correctly classify children’s mobility levels, and that it did so more accurately than a network trained only on the cerebral palsy data.

“Because the network is trained on a very large dataset of more general movements, it has some ideas about how to extract features from a sequence of human poses,” Zhao explains. “While the larger dataset and the cerebral palsy dataset may be different, they share some common patterns of human actions and how those actions can be encoded.”

The team test-ran their method on a range of mobile devices, including smartphones, tablets, and laptops, and found that most devices could successfully run the program and generate a clinical score from videos in near real time.

The researchers are now developing an app, which they envision parents and patients could one day use to automatically analyze videos of patients taken in the comfort of their own environment. The results could then be sent to a doctor for further evaluation. The team is also planning to adapt the method to evaluate other neurological disorders.

“This approach could be easily expandable to other disabilities such as stroke or Parkinson’s disease, once it is tested in that population using appropriate metrics for adults,” says Alberto Esquenazi, chief medical officer at Moss Rehabilitation Hospital in Philadelphia, who was not involved in the study. “It could improve care and reduce the overall cost of health care and the need for families to lose productive work time, and it is my hope [that it could] increase compliance.”

“In the future, this may also help us predict how patients would respond to interventions sooner,” Krebs says. “Because we could evaluate them more often, to see if an intervention is having an impact.”

This research was supported by Takeda Development Center Americas, Inc.
