Final DXA-nation

Longitudinal Image-based AI Models for Health and Medicine

AI can see the end! Deep learning predicts all-cause mortality from single and sequential body composition imaging

DXA imaging affords many types of body composition visualizations. (Image by Author)

Key Points (TL;DR):

  • The combination of body composition imaging and meta-data (e.g. age, sex, grip strength, walking speed) resulted in the best 10-year mortality predictions
  • Longitudinal or sequential models overall performed better than single record models, highlighting the importance of modeling change and time dependencies in health data
  • Longitudinal models have the potential to provide a more comprehensive assessment of one’s health
  • Read the paper

Artificial intelligence (AI) and machine learning (ML) are revolutionizing healthcare, driving us toward the era of precision medicine. The motivation to develop AI health models is to reduce deaths and disease as well as to extend quality of life. Well-trained models can analyze the data presented to them more thoroughly, offering a more comprehensive assessment of one’s health.

Image-based medical AI/ML models have now reached a maturity where they often rival or even surpass human performance, adeptly identifying patterns and anomalies that might easily elude the human eye. Nevertheless, the vast majority of these models still operate on single time-point data, providing an isolated snapshot of health at one specific instance. Whether these are uni-modal or multi-modal models, they tend to work with data gathered within a relatively narrow timeframe, which forms the basis of a prediction. Yet, in the broader context of AI/ML for medical applications, these single time-point models represent just the first step, the proverbial ‘low-hanging fruit.’ One frontier of medical AI research is longitudinal models, which provide a more holistic view of an individual’s health over time.

Longitudinal models are designed to integrate data from multiple time-points, capturing a person’s health trajectory rather than a standalone moment. These models tap into the dynamic nature of human health, where physiological change is constant. The ability to map these changes to specific outcomes or health questions could be a game-changer in predictive healthcare. The concept of longitudinal data isn’t new to clinical practice; it is often used to monitor aging and predict frailty. A prime example is the tracking of bone mineral density (BMD), a key marker for osteoporosis and frailty. Regular assessments of BMD can detect significant decreases, indicating potential health risks.

Historically, the development of longitudinal models has faced several significant challenges. Aside from the larger data volumes and computation required per individual, the most critical obstacle lies in the curation of longitudinal medical data itself. Unlike single time-point data, longitudinal data involves tracking patients’ health information over prolonged periods, often across multiple healthcare institutions. This requires meticulous data organization and management, making the curation process both time-consuming and expensive. Multiple successful studies have been funded to prospectively collect longitudinal data. These studies report challenges with respect to patient retention over an extended observation period. Hence, despite the potential advantages of longitudinal models, their development has remained a complex, resource-intensive endeavor.

Changes in body composition, the proportions of lean and fat soft tissue and bone, are known to be associated with mortality. In our study, we aimed to use body composition information to better predict all-cause mortality, or in simpler terms, the likely timeline of a person’s life. We evaluated the performance of models built on both single time-point and longitudinal data, respectively referred to as our ‘single record’ and ‘sequential’ models. Single record models allowed us to evaluate what kind of information was most predictive of mortality. Sequential models were developed to capture change over time and to evaluate how that change affects mortality predictions.

The data for this study was acquired from a longitudinal study known as the Health, Aging, and Body Composition (Health ABC) study, in which over 3,000 older, multi-race female and male adults were followed and monitored for up to 16 years. This study resulted in a rich and comprehensive longitudinal data set. As part of this study, patients received total body dual energy X-ray absorptiometry (TBDXA) imaging and several pieces of meta-data were collected (see table XXX). In line with best modeling practices, and to avoid data leakage and mitigate overfitting, the data was split into a train, validation, and hold-out test set using a 70%/10%/20% split.
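
A patient-level split of this kind can be sketched as follows. This is a minimal illustration, not the authors’ actual pipeline; the `patient_ids` array, function name, and random seed are assumptions.

```python
import numpy as np

def patient_level_split(patient_ids, train_frac=0.7, val_frac=0.1, seed=42):
    """Split unique patient IDs into train / validation / hold-out test sets.

    Splitting by patient (not by individual visit) keeps all records from one
    person in a single partition, which avoids leakage between the sets.
    """
    rng = np.random.default_rng(seed)
    unique_ids = np.unique(patient_ids)
    rng.shuffle(unique_ids)

    n = len(unique_ids)
    n_train = int(train_frac * n)
    n_val = int(val_frac * n)

    train_ids = set(unique_ids[:n_train])
    val_ids = set(unique_ids[n_train:n_train + n_val])
    test_ids = set(unique_ids[n_train + n_val:])   # remaining ~20%
    return train_ids, val_ids, test_ids
```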

We quantify body composition using total body dual energy X-ray absorptiometry (TBDXA) imaging, which has long been considered a gold standard imaging modality. Historically, patient meta-data, which includes variables like age, body mass index (BMI), grip strength, and walking speed, was used to assess aging/mortality and served as a surrogate measurement of body composition. The prevalent use of patient meta-data and surrogate measures of body composition was driven by the limited accessibility of DXA scanners. Accessibility has improved greatly in recent years, with scans becoming cheaper and no longer requiring a physician referral/order/prescription.

Three single record models were built, each with different data inputs but all with the same output: a 10-year mortality probability. The first model takes only patient meta-data and is a neural network with a single 32-unit, ReLU-activated hidden layer and a sigmoid prediction layer. The second model uses only TBDXA images as input and consists of a DenseNet121 modified to handle two image channels instead of the three color channels (RGB) seen in most natural images. The dual-energy nature of DXA results in a high- and a low-energy X-ray image, which are fully registered and stacked into two image channels. The third model combines the meta-data embedding of model one with the TBDXA image embedding of model two, passes the result through a 512-unit and then a 64-unit fully-connected ReLU layer, and finally a sigmoid prediction layer.

Diagram of data inputs, model architectures, and methods for single record models (Image by Authors)
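
For concreteness, here is a minimal PyTorch sketch of the three single record architectures as described above. The layer sizes follow the text; everything else (input dimensions, pretraining choices, class names) is an assumption for illustration, not the authors’ exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class MetadataModel(nn.Module):
    """Model 1: meta-data only -> 32-unit ReLU hidden layer -> sigmoid."""
    def __init__(self, n_features: int):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.hidden(x))

class DXAImageModel(nn.Module):
    """Model 2: DenseNet121 with a 2-channel stem for the registered
    high/low-energy DXA images stacked as two image channels."""
    def __init__(self):
        super().__init__()
        self.backbone = densenet121(weights=None)
        # Replace the 3-channel RGB stem with a 2-channel convolution.
        self.backbone.features.conv0 = nn.Conv2d(
            2, 64, kernel_size=7, stride=2, padding=3, bias=False)
        n_emb = self.backbone.classifier.in_features  # 1024 for DenseNet121
        self.backbone.classifier = nn.Identity()      # expose the embedding
        self.head = nn.Sequential(nn.Linear(n_emb, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.backbone(x))

class FusionModel(nn.Module):
    """Model 3: concatenate meta-data and image embeddings, then 512- and
    64-unit fully-connected ReLU layers and a sigmoid prediction layer."""
    def __init__(self, n_features: int):
        super().__init__()
        self.meta = MetadataModel(n_features).hidden   # 32-dim embedding
        self.image = DXAImageModel().backbone          # 1024-dim embedding
        self.head = nn.Sequential(
            nn.Linear(32 + 1024, 512), nn.ReLU(),
            nn.Linear(512, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x_meta, x_img):
        z = torch.cat([self.meta(x_meta), self.image(x_img)], dim=1)
        return self.head(z)
```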

Three sequential models were built and evaluated. The single record model architectures served as the base for each sequential model, but the sigmoid prediction layers were removed so that the output was a vector of feature embeddings. Over the course of the study, data was collected from each patient at multiple time points. The data from each time point was fed into the appropriate model to acquire the corresponding feature vector. The feature vectors for each patient were ordered and stacked into a sequence. A Long Short-Term Memory (LSTM) model was trained to take the sequence of feature vectors and output a 10-year mortality prediction. As previously mentioned, there are several difficulties with conducting long-term studies, with retention and data collection being common problems. Our study was not exempt from these problems, and some patients had more data points than others as a result. An LSTM model was chosen as the sequence modeling approach because LSTMs are not constrained to the same input sequence length for every patient. That is, LSTMs can work with sequences of varying length, eliminating the need to pad sequences when patients were short of the full set of data points (~10).

Diagram of data inputs, model architectures, and methods for sequential models (Image by Authors)
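
A minimal sketch of this sequence step, assuming the per-visit embeddings have already been extracted. The embedding size (1056 = 1024 image + 32 meta-data dimensions in the sketch above), the hidden size, and one-patient-at-a-time processing are illustrative assumptions, not the authors’ exact configuration.

```python
import torch
import torch.nn as nn

class SequenceMortalityModel(nn.Module):
    """LSTM over a variable-length sequence of per-visit feature vectors,
    producing a 10-year mortality probability."""
    def __init__(self, emb_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (1, n_visits, emb_dim); n_visits may differ per patient,
        # so no padding is needed when patients are processed one at a time.
        _, (h_n, _) = self.lstm(visits)
        return self.head(h_n[-1])        # (1, 1) mortality probability

# Example with assumed sizes: one patient with 4 visits, another with 7.
model = SequenceMortalityModel(emb_dim=1056)
for n_visits in (4, 7):
    seq = torch.randn(1, n_visits, 1056)
    prob = model(seq)
```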

Area under the receiver operating characteristic curve (AUROC) on the hold-out test set shows that meta-data performs better than TBDXA images alone in both the single record and sequential models. Nevertheless, combining meta-data and TBDXA imaging resulted in the best AUROCs in both modeling paradigms, which indicates that imaging contains useful information predictive of mortality that is not captured by the meta-data. Another way to interpret this is that the meta-data is not a full surrogate measure of body composition with respect to predicting mortality. If it were a full surrogate, combining TBDXA imaging with meta-data would have resulted in no significant increase or change in AUROC. The fact that the combination resulted in higher AUROCs indicates that imaging provides orthogonal information beyond what the meta-data captures, and further justifies the utility of imaging.

Single Record and Sequential Models AUC Performance (Image by Authors)
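
The AUROC comparison on the hold-out test set is straightforward to reproduce with scikit-learn. The sketch below adds a simple bootstrap confidence interval, which is an illustrative choice rather than the exact statistical procedure used in the paper; the labels and predicted probabilities are placeholder inputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_prob, n_boot=1000, seed=0):
    """AUROC plus a bootstrap 95% confidence interval on a hold-out set.

    y_true: 1 if the patient died within 10 years, 0 otherwise.
    y_prob: a model's predicted 10-year mortality probabilities.
    """
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    rng = np.random.default_rng(seed)
    point = roc_auc_score(y_true, y_prob)

    boots, n = [], len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample patients with replacement
        if len(np.unique(y_true[idx])) < 2:    # AUROC needs both classes present
            continue
        boots.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

# Usage (placeholder arrays from the hold-out set):
# auc, ci = auroc_with_ci(y_test, combined_model_probs)
```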

Longitudinal or sequential models overall performed better than single record models. This is true across all modeling approaches and input data types (meta-data, image only, combined meta-data and image). These results demonstrate the importance of modeling change and the time dependencies of health data.

We performed an Integrated Discrimination Improvement (IDI) evaluation to assess the benefit of combining imaging with metadata, compared to using metadata alone. This evaluation was conducted on the sequence models, which outperformed the single-record models. The IDI was found to be 5.79, with an integrated sensitivity and specificity of 3.46 and 2.33, respectively. This means that the combination of imaging and metadata improves the model’s ability to correctly identify those who will not survive the next 10 years by 3.46%, and enhances the ability to correctly identify those who will survive the next 10 years by 2.33%. Overall, this suggests an improvement in model performance of roughly 5.8%.

Integrated Discrimination Improvement (IDI) evaluation results (Image by Authors)
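
For reference, the IDI can be computed directly from the two models’ predicted probabilities. The sketch below follows the standard definition (IDI = improvement in integrated sensitivity plus improvement in integrated specificity); the input arrays are placeholders, and this is not necessarily the exact code used for the paper’s analysis.

```python
import numpy as np

def integrated_discrimination_improvement(y_true, p_old, p_new):
    """IDI of a new model (e.g. imaging + metadata) over an old model
    (e.g. metadata alone).

    Returns (idi, delta_sensitivity, delta_specificity), the changes in
    integrated sensitivity and integrated specificity and their sum.
    """
    y_true, p_old, p_new = map(np.asarray, (y_true, p_old, p_new))
    events, nonevents = (y_true == 1), (y_true == 0)

    # Integrated sensitivity: mean predicted risk among those who died.
    delta_sens = p_new[events].mean() - p_old[events].mean()
    # Integrated specificity: lower mean predicted risk among survivors.
    delta_spec = p_old[nonevents].mean() - p_new[nonevents].mean()
    return delta_sens + delta_spec, delta_sens, delta_spec

# Usage (placeholder arrays from the hold-out set); multiply by 100 to
# express the results as percentages:
# idi, d_sens, d_spec = integrated_discrimination_improvement(
#     y_test, probs_metadata_only, probs_combined)
```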

Our study underscores the promising potential of longitudinal AI/ML models in the realm of predictive healthcare, specifically in the context of all-cause mortality. The comparative evaluation of single record models and longitudinal models revealed that the latter offers superior performance, indicating the critical role of modeling change over time in health data analysis. The clinical implications of our findings include the ability to provide a more precise and holistic assessment of one’s health through models that account for a patient’s historical or longitudinal data. While the data needed for developing longitudinal health models exists, the proper infrastructure and institutional support are not yet in place to enable efficient data curation and development of these models at scale. Nevertheless, many are working to overcome these hurdles, and the development of longitudinal models is one of many exciting frontiers for AI in medicine.

The clinical implications of these findings are far-reaching. Longitudinal models have the potential to transform care delivery by enabling more precise, personalized predictions about a patient’s health trajectory. Such models can inform proactive interventions, thereby enhancing care outcomes and possibly even prolonging life. Furthermore, the use of both metadata and imaging data sets a new precedent for future AI/ML models, suggesting a synergistic approach for optimal results. It reinforces the need for multidimensional, nuanced data to paint an accurate and holistic picture of a patient’s health. These findings represent significant strides in the application of AI/ML in healthcare, highlighting an exciting path forward in our pursuit of precision medicine.
