An interpretable neural network for outcome prediction in traumatic brain injury

BACKGROUND: Traumatic Brain Injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have been used previously to help physicians predict long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are "black box": it is difficult for humans to understand the decisions made by the model, with post-hoc explanations only identifying isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization.

METHODS: In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months from hospitalization in TBI patients, based on information available at the time of admission.

RESULTS: The proposed method is compared to established machine learning methods (XGBoost, Random Forest, and SVM), achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC): 0.799 for the proposed method vs. 0.810 for the best black-box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model's predictions and can be used as general guidelines by clinicians to inform treatment decisions.

CONCLUSIONS: The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes, and in situations when not all necessary factors are known to inform the full model's decision.
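The METHODS paragraph refers to a neural network built on tropical geometry, where ordinary multiplication is replaced by addition and addition by taking a maximum (max-plus arithmetic). The article itself does not reproduce code here, so the snippet below is only a minimal PyTorch sketch of that general idea; the layer structure, dimensions, and the "difference of two max-plus layers" classifier are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MaxPlusLayer(nn.Module):
    """Tropical (max-plus) analogue of a linear layer: y_j = max_i(x_i + w_ji) + b_j.
    Illustrative only; the paper's exact architecture may differ."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # x: (batch, in) -> broadcast-add weights, then take max over inputs
        s = x.unsqueeze(1) + self.weight.unsqueeze(0)   # (batch, out, in)
        return s.max(dim=-1).values + self.bias          # (batch, out)

class TropicalNet(nn.Module):
    """Hypothetical classifier: difference of two max-plus layers (a tropical
    rational map) squashed by a sigmoid."""
    def __init__(self, in_features, hidden=8):
        super().__init__()
        self.pos = MaxPlusLayer(in_features, hidden)
        self.neg = MaxPlusLayer(in_features, hidden)

    def forward(self, x):
        logit = self.pos(x).max(dim=-1).values - self.neg(x).max(dim=-1).values
        return torch.sigmoid(logit)

Because each max-plus output is the single best additive combination of inputs, a trained layer of this kind can in principle be read off as "if feature i plus its weight dominates the alternatives, then...", which is the sort of rule extraction the abstract describes.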

Bibliographic Details
Main Authors: Minoccheri, Cristian; Williamson, Craig A.; Hemmila, Mark; Ward, Kevin; Stein, Erica B.; Gryak, Jonathan; Najarian, Kayvan
Format: Online Article Text
Language: English
Journal: BMC Med Inform Decis Mak
Published: BioMed Central, 2022-08-01
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9341077/
https://www.ncbi.nlm.nih.gov/pubmed/35915430
http://dx.doi.org/10.1186/s12911-022-01953-z
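The RESULTS paragraph of the abstract compares the proposed model against XGBoost, Random Forest, and SVM by AUC (0.799 vs. 0.810 for the best black-box baseline). As a rough illustration of how such a baseline comparison is typically run (not the authors' code; the data below are synthetic placeholders standing in for admission-time TBI features), a scikit-learn/xgboost evaluation loop might look like:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Synthetic placeholder data (assumption) in place of the real TBI cohort
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

baselines = {
    "XGBoost": XGBClassifier(eval_metric="logloss"),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]   # probability of unfavorable outcome
    print(f"{name}: AUC = {roc_auc_score(y_te, scores):.3f}")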