Improvement of a prediction model for heart failure survival through explainable artificial intelligence
Main author: | Moreno-Sánchez, Pedro A. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Cardiovascular Medicine |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10434534/ https://www.ncbi.nlm.nih.gov/pubmed/37600061 http://dx.doi.org/10.3389/fcvm.2023.1219586 |
_version_ | 1785091915884003328 |
---|---|
author | Moreno-Sánchez, Pedro A. |
author_facet | Moreno-Sánchez, Pedro A. |
author_sort | Moreno-Sánchez, Pedro A. |
collection | PubMed |
description | Cardiovascular diseases and their associated disorder, heart failure (HF), are major causes of death globally, making it a priority for doctors to detect and predict their onset and medical consequences. Artificial Intelligence (AI) allows doctors to discover clinical indicators and enhance their diagnoses and treatments. Specifically, “eXplainable AI” (XAI) offers tools to improve clinical prediction models that suffer from poor interpretability of their results. This work presents an explainability analysis and evaluation of two HF survival prediction models using a dataset of 299 patients who have experienced HF. The first model uses survival analysis, with the death event and follow-up time as target features, while the second model approaches the problem as a classification task to predict death. Both models employ an optimization data workflow pipeline capable of selecting the best machine learning algorithm as well as the optimal collection of features. Moreover, different post hoc techniques have been used for the explainability analysis of the models. The main contribution of this paper is an explainability-driven approach to selecting the HF survival prediction model that best balances prediction performance and explainability. On this basis, the most balanced explainable prediction models are a Survival Gradient Boosting model for the survival analysis approach and a Random Forest model for the classification approach, with a c-index of 0.714 and a balanced accuracy of 0.74 (std 0.03), respectively. The features selected by the SCI-XAI pipeline are similar in the two models: “serum_creatinine”, “ejection_fraction”, and “sex” are selected in both approaches, with the addition of “diabetes” for the survival analysis model. Moreover, the application of post hoc XAI techniques confirms common findings from both approaches, identifying “serum_creatinine” as the most relevant feature for the predicted outcome, followed by “ejection_fraction”. The explainable prediction models for HF survival presented in this paper should improve the adoption of clinical prediction models by giving doctors insights to better understand the reasoning behind usually “black-box” AI clinical solutions and to make more reasonable and data-driven decisions. (An illustrative code sketch of the two modelling approaches described here follows the record fields below.) |
format | Online Article Text |
id | pubmed-10434534 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10434534 2023-08-18 Improvement of a prediction model for heart failure survival through explainable artificial intelligence Moreno-Sánchez, Pedro A. Front Cardiovasc Med Cardiovascular Medicine [abstract as in the description field above] Frontiers Media S.A. 2023-08-01 /pmc/articles/PMC10434534/ /pubmed/37600061 http://dx.doi.org/10.3389/fcvm.2023.1219586 Text en © 2023 Moreno-Sánchez. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) (https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Cardiovascular Medicine Moreno-Sánchez, Pedro A. Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title | Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title_full | Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title_fullStr | Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title_full_unstemmed | Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title_short | Improvement of a prediction model for heart failure survival through explainable artificial intelligence |
title_sort | improvement of a prediction model for heart failure survival through explainable artificial intelligence |
topic | Cardiovascular Medicine |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10434534/ https://www.ncbi.nlm.nih.gov/pubmed/37600061 http://dx.doi.org/10.3389/fcvm.2023.1219586 |
work_keys_str_mv | AT morenosanchezpedroa improvementofapredictionmodelforheartfailuresurvivalthroughexplainableartificialintelligence |
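
The abstract above describes two modelling routes over the 299-patient HF dataset: a survival analysis model evaluated by c-index and a classification model evaluated by balanced accuracy, followed by post hoc explainability. The sketch below is a minimal, hedged illustration of that setup, not the paper's actual SCI-XAI pipeline: the library choices (scikit-survival, scikit-learn), the local file name `heart_failure_clinical_records.csv`, the column names, and the use of permutation importance as the post hoc step are assumptions made for this example; the record does not specify the paper's exact feature-selection or XAI tooling.

```python
# Illustrative sketch only (assumed libraries and file path, not the paper's pipeline):
# one survival model scored by c-index, one classifier scored by balanced accuracy,
# and permutation importance as an example post hoc explainability step.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.util import Surv

# Hypothetical local copy of the 299-patient heart failure clinical records dataset.
df = pd.read_csv("heart_failure_clinical_records.csv")

# Feature subsets reported as selected in the abstract.
surv_features = ["serum_creatinine", "ejection_fraction", "sex", "diabetes"]
clf_features = ["serum_creatinine", "ejection_fraction", "sex"]

# --- Survival analysis approach: death event + follow-up time as the target ---
X_surv = df[surv_features]
y_surv = Surv.from_arrays(event=df["DEATH_EVENT"].astype(bool), time=df["time"])
Xs_tr, Xs_te, ys_tr, ys_te = train_test_split(X_surv, y_surv, random_state=0)
gbs = GradientBoostingSurvivalAnalysis(random_state=0).fit(Xs_tr, ys_tr)
print("c-index:", gbs.score(Xs_te, ys_te))  # Harrell's concordance index

# --- Classification approach: predict the death event directly ---
X_clf = df[clf_features]
y_clf = df["DEATH_EVENT"]
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(
    X_clf, y_clf, stratify=y_clf, random_state=0
)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xc_tr, yc_tr)
print("balanced accuracy:", balanced_accuracy_score(yc_te, rf.predict(Xc_te)))

# --- Post hoc explainability (illustrative): permutation feature importance ---
imp = permutation_importance(rf, Xc_te, yc_te, n_repeats=30, random_state=0)
for name, score in sorted(zip(clf_features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

Because the paper's reported figures (c-index 0.714, balanced accuracy 0.74 ± 0.03) come from its optimized SCI-XAI workflow with algorithm and feature selection, the numbers produced by this simple single-split sketch will differ; it is intended only to make the two problem formulations and the post hoc step concrete.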