
Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients

BACKGROUND: Although machine learning models developed on electronic health records have become a promising approach to the early prediction of hospital mortality, few studies focus on methods for handling missing data in electronic health records or evaluate model robustness to data missingness....


Bibliographic Details
Main Authors: Zeng, Zhixuan, Liu, Yang, Yao, Shuo, Liu, Jiqiang, Xiao, Bing, Liu, Chenxue, Gong, Xun
Format: Online Article Text
Language: English
Published: SAGE Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10170607/
https://www.ncbi.nlm.nih.gov/pubmed/37179744
http://dx.doi.org/10.1177/20552076231171482
_version_ 1785039264948420608
author Zeng, Zhixuan
Liu, Yang
Yao, Shuo
Liu, Jiqiang
Xiao, Bing
Liu, Chenxue
Gong, Xun
author_facet Zeng, Zhixuan
Liu, Yang
Yao, Shuo
Liu, Jiqiang
Xiao, Bing
Liu, Chenxue
Gong, Xun
author_sort Zeng, Zhixuan
collection PubMed
description BACKGROUND: Although machine learning models developed on electronic health records have become a promising approach to the early prediction of hospital mortality, few studies focus on methods for handling missing data in electronic health records or evaluate model robustness to data missingness. This study proposes an attention architecture that shows excellent predictive performance and is robust to data missingness. METHODS: Two public intensive care unit databases were used for model training and external validation, respectively. Three neural networks based on the attention architecture (masked attention model, attention model with imputation, attention model with missing indicator) were developed, using a masked attention mechanism, multiple imputation, and a missing indicator, respectively, to handle missing data. Model interpretability was analyzed through attention allocations. Extreme gradient boosting and logistic regression with multiple imputation or a missing indicator (logistic regression with imputation, logistic regression with missing indicator) were used as baseline models. Model discrimination and calibration were evaluated by the area under the receiver operating characteristic curve, the area under the precision-recall curve, and the calibration curve. In addition, model robustness to data missingness in both model training and validation was evaluated by three analyses. RESULTS: In total, 65,623 and 150,753 intensive care unit stays were included in the training set and the test set, respectively, with mortality rates of 10.1% and 8.5% and overall missing rates of 10.3% and 19.7%. The attention model with missing indicator had the highest area under the receiver operating characteristic curve in external validation (0.869; 95% CI: 0.865 to 0.873); the attention model with imputation had the highest area under the precision-recall curve (0.497; 95% CI: 0.480 to 0.513). The masked attention model and the attention model with imputation showed better calibration than the other models. The three neural networks showed different patterns of attention allocation. In terms of robustness to data missingness, the masked attention model and the attention model with missing indicator were more robust to missing data in model training, while the attention model with imputation was more robust to missing data in model validation. CONCLUSIONS: The attention architecture has the potential to become an excellent model architecture for clinical prediction tasks with data missingness.
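The masked attention idea described in the abstract — letting attention skip missing features entirely rather than imputing them — can be sketched as below. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation; the function name, the single scoring vector `w_score`, and the pooling step are illustrative.

```python
import numpy as np

def masked_attention_pool(x, mask, w_score):
    """Pool per-feature embeddings with attention while ignoring
    missing features (the masked-attention idea from the abstract).

    x:       (n_features, d) one embedding row per clinical feature
    mask:    (n_features,) bool, True where the feature was OBSERVED
    w_score: (d,) scoring vector giving one attention logit per feature
    Returns (pooled_embedding, attention_weights).
    """
    x = np.where(mask[:, None], x, 0.0)       # zero out missing rows
    logits = x @ w_score                      # one logit per feature
    logits = np.where(mask, logits, -np.inf)  # missing -> -inf logit
    logits = logits - logits[mask].max()      # stable softmax shift
    weights = np.exp(logits)                  # exp(-inf) == 0.0
    weights = weights / weights.sum()         # missing features get weight 0
    return weights @ x, weights
```

Because softmax maps a `-inf` logit to an exact zero weight, missing features contribute nothing to the pooled representation and no imputed value is ever needed.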
format Online
Article
Text
id pubmed-10170607
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher SAGE Publications
record_format MEDLINE/PubMed
spelling pubmed-101706072023-05-11 Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients Zeng, Zhixuan Liu, Yang Yao, Shuo Liu, Jiqiang Xiao, Bing Liu, Chenxue Gong, Xun Digit Health Original Research SAGE Publications 2023-05-07 /pmc/articles/PMC10170607/ /pubmed/37179744 http://dx.doi.org/10.1177/20552076231171482 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by-nc/4.0/ This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
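The imputation and missing-indicator strategies used for the baseline models can be sketched as below. This is a minimal NumPy sketch, not the study's pipeline: the paper uses multiple imputation, whereas this stand-in uses simple column-mean imputation, and `impute_with_indicator` is a hypothetical name.

```python
import numpy as np

def impute_with_indicator(X):
    """Mean-impute NaN entries column-wise and append one binary
    missing-indicator column per feature, combining the two
    missing-data strategies used by the baseline models.

    X: (n_samples, n_features) with NaN marking missing entries.
    Returns an (n_samples, 2 * n_features) array.
    """
    indicator = np.isnan(X).astype(float)        # 1.0 where missing
    col_means = np.nanmean(X, axis=0)            # mean of observed values
    X_imputed = np.where(np.isnan(X), col_means, X)
    return np.hstack([X_imputed, indicator])
```

A model fitted on this output sees both the filled-in value and an explicit signal that the value was missing, so informative missingness (e.g. a lab test never ordered) can still carry predictive weight.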
spellingShingle Original Research
Zeng, Zhixuan
Liu, Yang
Yao, Shuo
Liu, Jiqiang
Xiao, Bing
Liu, Chenxue
Gong, Xun
Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title_full Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title_fullStr Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title_full_unstemmed Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title_short Neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
title_sort neural networks based on attention architecture are robust to data missingness for early predicting hospital mortality in intensive care unit patients
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10170607/
https://www.ncbi.nlm.nih.gov/pubmed/37179744
http://dx.doi.org/10.1177/20552076231171482
work_keys_str_mv AT zengzhixuan neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT liuyang neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT yaoshuo neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT liujiqiang neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT xiaobing neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT liuchenxue neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients
AT gongxun neuralnetworksbasedonattentionarchitecturearerobusttodatamissingnessforearlypredictinghospitalmortalityinintensivecareunitpatients