
Demystifying the Black Box: The Importance of Interpretability of Predictive Models in Neurocritical Care

Bibliographic Details
Main Authors: Moss, Laura; Corsar, David; Shaw, Martin; Piper, Ian; Hawthorne, Christopher
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects: Big Data in Neurocritical Care
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9343258/
https://www.ncbi.nlm.nih.gov/pubmed/35523917
http://dx.doi.org/10.1007/s12028-022-01504-4
Record ID: pubmed-9343258
Collection: PubMed
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Neurocrit Care (Big Data in Neurocritical Care)
Published Online: Springer US, 2022-05-06
Description: Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.
License: © The Author(s) 2022. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/