
Opening the black box of artificial intelligence for clinical decision support: A study predicting stroke outcome

State-of-the-art machine learning (ML) artificial intelligence methods are increasingly leveraged in clinical predictive modeling to provide clinical decision support systems to physicians. Modern ML approaches such as artificial neural networks (ANNs) and tree boosting often perform better than more traditional methods like logistic regression. On the other hand, these modern methods yield only a limited understanding of the resulting predictions. In the medical domain, however, understanding of the applied models is essential, in particular when informing clinical decision support. Thus, in recent years, interpretability methods for modern ML methods have emerged that potentially allow explainable predictions paired with high performance. To our knowledge, this work presents the first explainability comparison of two modern ML methods, tree boosting and multilayer perceptrons (MLPs), with traditional logistic regression methods, using a stroke outcome prediction paradigm. Here, we used clinical features to predict a dichotomized 90-day post-stroke modified Rankin Scale (mRS) score. For interpretability, we evaluated the clinical features’ importance with regard to predictions using deep Taylor decomposition for the MLP, Shapley values for tree boosting, and model coefficients for logistic regression. With regard to performance as measured by area under the curve (AUC) values on the test dataset, all models performed comparably: logistic regression AUCs were 0.83, 0.83, and 0.81 for three different regularization schemes; the tree boosting AUC was 0.81; the MLP AUC was 0.83. Importantly, the interpretability analysis demonstrated consistent results across models, ranking age and stroke severity among the most important predictive features. For less important features, some differences were observed between the methods. Our analysis suggests that modern machine learning methods can provide explainability that is compatible with domain knowledge interpretation and traditional method rankings. Future work should focus on replicating these findings in other datasets and on further testing of different explainability methods.
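To make the comparison described in the abstract concrete, here is a minimal illustrative sketch, not the authors' code: it fits a regularized logistic regression and a gradient-boosted tree model on a binary outcome, reports test AUCs, and extracts feature importance the two ways the abstract names for these models (coefficients vs. Shapley values). It uses scikit-learn's GradientBoostingClassifier and the shap package as stand-ins; all feature names, data, and hyperparameters are hypothetical placeholders, and the MLP/deep Taylor decomposition step is omitted.

```python
# Illustrative sketch only (not the study's code or data).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import shap  # Shapley-value attributions for tree models

# Hypothetical clinical features; the study predicts a dichotomized
# 90-day post-stroke mRS score from features such as these.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.normal(70, 10, 500),
    "nihss_baseline": rng.integers(0, 25, 500),  # stroke severity
    "glucose": rng.normal(120, 30, 500),
})
y = (0.05 * X["age"] + 0.2 * X["nihss_baseline"]
     + rng.normal(0, 2, 500) > 8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Logistic regression: model coefficients serve as feature importances.
lr = LogisticRegression(penalty="l2", max_iter=1000).fit(X_tr, y_tr)
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
print("LR coefficients:", dict(zip(X.columns, lr.coef_[0])))

# Tree boosting: Shapley values attribute each prediction to features.
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("GB AUC:", roc_auc_score(y_te, gb.predict_proba(X_te)[:, 1]))
shap_values = shap.TreeExplainer(gb).shap_values(X_te)
print("GB mean |SHAP|:", dict(zip(X.columns, np.abs(shap_values).mean(0))))
```

Comparing the resulting importance rankings across models is the kind of cross-model consistency check the study reports, where age and stroke severity ranked highest for all methods.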

Bibliographic Details
Main Authors: Zihni, Esra; Madai, Vince Istvan; Livne, Michelle; Galinovic, Ivana; Khalil, Ahmed A.; Fiebach, Jochen B.; Frey, Dietmar
Format: Online Article (Text)
Language: English
Published: PLoS One, Public Library of Science, 2020-04-06
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7135268/
https://www.ncbi.nlm.nih.gov/pubmed/32251471
http://dx.doi.org/10.1371/journal.pone.0231166
License: © 2020 Zihni et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
work_keys_str_mv AT zihniesra openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT madaivinceistvan openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT livnemichelle openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT galinovicivana openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT khalilahmeda openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT fiebachjochenb openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome
AT freydietmar openingtheblackboxofartificialintelligenceforclinicaldecisionsupportastudypredictingstrokeoutcome