This study developed and evaluated an AI-driven malnutrition assessment toolbox for elderly trauma patients in intensive care units (ICUs). It compared multiple machine learning models, including logistic regression, random forests, XGBoost, and neural-network-based ensemble models, across different indicator configurations and longitudinal data from Day 1 and Day 3.
The study found that baseline data from Day 1 alone did not provide a reliable prediction of malnutrition risk. However, the use of longitudinal measurements substantially improved prediction performance. Among the models tested, tree-based methods consistently outperformed linear and distance-based models. Specifically, a three-time-point XGBoost model achieved the best individual performance, while neural-network-based ensemble models further improved predictive stability. Additionally, models utilizing a minimal indicator set—including bilateral mid-upper arm circumference, calf circumference, and key static variables—outperformed models using the full indicator set. The best overall performance was achieved by an ensemble model using the minimal indicator set from Day 1 and Day 3.
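The core finding above — longitudinal (Day 1 + Day 3) anthropometric features beating baseline-only features, and tree ensembles beating linear models — can be illustrated with a small, entirely synthetic sketch. All variable names and data here are hypothetical, and scikit-learn's `GradientBoostingClassifier` is used only as a dependency-free stand-in for XGBoost; this is not the study's actual pipeline or data.

```python
# Hypothetical sketch: baseline-only vs. longitudinal anthropometric features,
# comparing a linear model with a tree ensemble. All data are synthetic and
# GradientBoostingClassifier stands in for XGBoost.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Day-1 measurements (cm): bilateral mid-upper arm circumference (MUAC),
# calf circumference, plus a static variable (age).
muac_l_d1 = rng.normal(26, 3, n)
muac_r_d1 = muac_l_d1 + rng.normal(0, 0.5, n)
calf_d1 = rng.normal(32, 3, n)
age = rng.integers(65, 95, n).astype(float)

# Simulated outcome: malnutrition risk rises with age and low calf circumference.
risk = 1 / (1 + np.exp(-(0.4 * (age - 80) / 10 - 0.2 * (calf_d1 - 32))))
y = (rng.random(n) < risk).astype(int)

# Day-3 change: malnourished patients lose circumference faster.
delta = -rng.normal(0.8, 0.3, n) * y - rng.normal(0.1, 0.2, n) * (1 - y)
muac_l_d3 = muac_l_d1 + delta
calf_d3 = calf_d1 + delta

X_day1 = np.column_stack([muac_l_d1, muac_r_d1, calf_d1, age])
X_long = np.column_stack([X_day1, muac_l_d3, calf_d3,
                          muac_l_d3 - muac_l_d1, calf_d3 - calf_d1])

results = {}
for fname, X in [("day1", X_day1), ("longitudinal", X_long)]:
    for model in (LogisticRegression(max_iter=1000),
                  GradientBoostingClassifier(random_state=0)):
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        results[(fname, type(model).__name__)] = auc
        print(f"{fname:14s} {type(model).__name__:27s} AUC={auc:.3f}")
```

In this toy setup the Day-1-to-Day-3 change is the discriminative signal, so the longitudinal feature set yields a clearly higher cross-validated AUC, mirroring the qualitative pattern the study reports.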
While the toolbox offers a potentially efficient and clinically feasible approach for early malnutrition assessment that could be integrated into clinical workflows or digital twin systems, the study focuses on predictive modeling and association rather than establishing clinical causation. The findings suggest that specific model architectures and longitudinal data points are critical for predictive accuracy in this population.
Background & Aims: Accurate assessment of clinical malnutrition using anthropometric and functional indicators could improve the care of elderly trauma patients in intensive care units (ICUs). This study aimed to develop an AI-driven malnutrition assessment toolbox based on a minimal set of clinically feasible indicators.

Methods: Multiple machine learning models, including logistic regression, support vector machines, k-nearest neighbors, decision trees, random forests, XGBoost, and neural-network-based ensemble models, were developed using different indicator configurations from a clinically collected patient dataset. Models were trained using baseline and longitudinal measurements to predict malnutrition risk. SHAP analysis was used to interpret the importance of selected indicators.

Results: Baseline (Day 1) data alone did not provide a reliable prediction, whereas longitudinal measurements substantially improved performance. Models based on a minimal indicator set, including bilateral mid-upper arm circumference, calf circumference, and key static variables, outperformed models using the full indicator set. Tree-based methods consistently outperformed linear and distance-based models, with the three-time-point XGBoost achieving the best individual performance. Neural-network-based ensemble models further improved predictive stability. The best overall performance was achieved by the ensemble model using the minimal indicator set from Day 1 and Day 3. SHAP analysis confirmed the importance of the selected indicators.

Conclusions: This AI-driven toolbox provides an efficient and clinically feasible approach for early malnutrition assessment in elderly trauma patients in the ICU. Its strong performance with a minimal indicator set supports its potential for integration into clinical workflows and future digital twin systems for intelligent nutritional management.
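The abstract's SHAP step — attributing model predictions to individual indicators — can be sketched without the `shap` package by using permutation importance, which answers the related question of how much each indicator contributes to held-out performance. This is a stand-in, not the study's SHAP analysis; the data, model, and feature names below are synthetic and illustrative.

```python
# Hypothetical sketch of indicator-importance analysis. The study used SHAP;
# scikit-learn's permutation importance serves here as a dependency-free
# stand-in for ranking indicators. Data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
features = ["muac_left", "muac_right", "calf_circumference", "age", "noise"]
X = rng.normal(size=(n, len(features)))

# Simulated outcome driven mainly by calf circumference, with a smaller age
# effect; "noise" is irrelevant by construction.
logits = -1.2 * X[:, 2] + 0.8 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permute each feature on the test set and measure the drop in accuracy.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:18s} {score:.3f}")
```

In this toy example the truly predictive indicator (calf circumference) tops the ranking while the irrelevant feature scores near zero, which is the kind of sanity check an importance analysis provides for a minimal indicator set.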