Putting the Personalized Metabolic Avatar into Production: A Comparison between Deep-Learning and Statistical Models for Weight Prediction

Bibliographic Details
Main Authors: Abeltino, Alessio, Bianchetti, Giada, Serantoni, Cassandra, Riente, Alessia, De Spirito, Marco, Maulucci, Giuseppe
Format: Online Article Text
Language: English
Published: MDPI 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10004838/
https://www.ncbi.nlm.nih.gov/pubmed/36904199
http://dx.doi.org/10.3390/nu15051199
Description
Summary: Nutrition is a cross-cutting sector in medicine, with a huge impact on health, from cardiovascular disease to cancer. The use of digital medicine in nutrition relies on digital twins: digital replicas of human physiology that represent an emerging solution for the prevention and treatment of many diseases. In this context, we have already developed a data-driven model of metabolism, called a “Personalized Metabolic Avatar” (PMA), using gated recurrent unit (GRU) neural networks for weight forecasting. However, putting a digital twin into production to make it available to users is a difficult task that is as important as model building. Among the principal issues, changes to data sources, models and hyperparameters introduce room for error and overfitting and can lead to abrupt variations in computational time. In this study, we selected the best strategy for deployment in terms of predictive performance and computational time. Several models, such as the Transformer model, recurrent neural networks (GRUs and long short-term memory networks, LSTMs) and the statistical SARIMAX model, were tested on ten users. PMAs based on GRUs and LSTMs showed optimal and stable predictive performance, with the lowest root mean squared errors (0.38 ± 0.16 to 0.39 ± 0.18) and retraining times acceptable for a production environment (12.7 ± 1.42 s to 13.5 ± 3.60 s). While the Transformer model did not bring a substantial improvement over the RNNs in terms of predictive performance, it increased the computational time for both forecasting and retraining by 40%. The SARIMAX model showed the worst predictive performance, though it had the best computational time. For all the models considered, the extent of the data source was a negligible factor, and a threshold was established for the number of time points needed for a successful prediction.
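
A minimal sketch (not the authors' code) of how such a comparison could be scored: a GRU forecaster and a SARIMAX baseline are fitted to a univariate daily weight series and evaluated by root mean squared error, the metric reported in the abstract. The synthetic data, the 7-day window, the model sizes and the library choices (TensorFlow/Keras and statsmodels) are illustrative assumptions only.

```python
# Illustrative comparison of a GRU network and a SARIMAX baseline for
# one-step-ahead weight forecasting on a synthetic daily series.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from tensorflow.keras import Sequential
from tensorflow.keras.layers import GRU, Dense

rng = np.random.default_rng(0)
weights = 80 + np.cumsum(rng.normal(0, 0.1, 200))   # hypothetical daily weights (kg)
train, test = weights[:180], weights[180:]

# --- SARIMAX baseline: fit on the training series, forecast the test horizon ---
sarimax = SARIMAX(train, order=(1, 1, 1)).fit(disp=False)
sarimax_pred = sarimax.forecast(steps=len(test))

# --- GRU: supervised framing with a sliding window of the previous 7 days ---
def make_windows(series, lag=7):
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X[..., None], y   # shape (samples, timesteps, features)

X_train, y_train = make_windows(train)
model = Sequential([GRU(32, input_shape=(7, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, verbose=0)

# Roll the GRU forward over the test period, feeding the true history back in
history = list(train)
gru_pred = []
for target in test:
    window = np.array(history[-7:])[None, :, None]
    gru_pred.append(float(model.predict(window, verbose=0)[0, 0]))
    history.append(target)

rmse = lambda y, p: float(np.sqrt(np.mean((np.asarray(y) - np.asarray(p)) ** 2)))
print("SARIMAX RMSE:", rmse(test, sarimax_pred))
print("GRU RMSE:    ", rmse(test, gru_pred))
```

The sketch mirrors only the scoring logic (RMSE on held-out points); the study additionally evaluated LSTM and Transformer variants and measured retraining and forecasting times, which are not reproduced here.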