
Towards interpretable, medically grounded, EMR-based risk prediction models

Machine-learning based risk prediction models have the potential to improve patient outcomes by assessing risk more accurately than clinicians. Significant additional value lies in these models providing feedback about the factors that amplify an individual patient’s risk. Identification of risk factors enables more informed decisions on interventions to mitigate or ameliorate modifiable factors. For these reasons, risk prediction models must be explainable and grounded on medical knowledge. Current machine learning-based risk prediction models are frequently ‘black-box’ models whose inner workings cannot be understood easily, making it difficult to define risk drivers. Since machine learning models follow patterns in the data rather than looking for medically relevant relationships, possible risk factors identified by these models do not necessarily translate into actionable insights for clinicians. Here, we use the example of risk assessment for postoperative complications to demonstrate how explainable and medically grounded risk prediction models can be developed. Pre- and postoperative risk prediction models are trained based on clinically relevant inputs extracted from electronic medical record data. We show that these models have similar predictive performance as models that incorporate a wider range of inputs and explain the models’ decision-making process by visualizing how different model inputs and their values affect the models’ predictions.
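The abstract describes training risk models on clinically relevant EMR-derived inputs and then explaining each prediction by showing how the individual input values push the predicted risk up or down. The sketch below illustrates that general pattern only; the feature names, the synthetic data, the gradient-boosted classifier, and the SHAP-based attribution are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: fit a risk model on tabular, clinically meaningful features
# and attribute one patient's predicted risk to the individual inputs.
# Feature names and data are synthetic/hypothetical for illustration.
import numpy as np
import pandas as pd
import shap  # tree-model explanation library (assumed available)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for preoperative EMR features (hypothetical names).
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "asa_class": rng.integers(1, 5, n),
    "hemoglobin_g_dl": rng.normal(13.5, 1.8, n),
    "creatinine_mg_dl": rng.normal(1.0, 0.3, n),
    "operation_duration_min": rng.normal(180, 60, n),
})

# Synthetic complication outcome loosely tied to the features (demo only).
logit = (0.03 * (X["age"] - 60)
         + 0.5 * (X["asa_class"] - 2)
         - 0.2 * (X["hemoglobin_g_dl"] - 13.5))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted trees are a common choice for tabular EMR-style data.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Per-patient attributions: how much each input shifts this patient's
# predicted risk relative to the baseline prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

patient = 0
for name, value, contribution in zip(X.columns, X_test.iloc[patient], shap_values[patient]):
    print(f"{name}={value:.1f}: contribution {contribution:+.3f}")
```

Printing (or plotting) these per-feature contributions is one way to realize the "visualize how different model inputs and their values affect the models' predictions" step described in the abstract; the specific explanation method used by the authors may differ.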


Bibliographic Details
Main Authors: Twick, Isabell, Zahavi, Guy, Benvenisti, Haggai, Rubinstein, Ronya, Woods, Michael S., Berkenstadt, Haim, Nissan, Aviram, Hosgor, Enes, Assaf, Dan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9200841/
https://www.ncbi.nlm.nih.gov/pubmed/35705550
http://dx.doi.org/10.1038/s41598-022-13504-7
_version_ 1784728155159789568
author Twick, Isabell
Zahavi, Guy
Benvenisti, Haggai
Rubinstein, Ronya
Woods, Michael S.
Berkenstadt, Haim
Nissan, Aviram
Hosgor, Enes
Assaf, Dan
author_facet Twick, Isabell
Zahavi, Guy
Benvenisti, Haggai
Rubinstein, Ronya
Woods, Michael S.
Berkenstadt, Haim
Nissan, Aviram
Hosgor, Enes
Assaf, Dan
author_sort Twick, Isabell
collection PubMed
description Machine-learning based risk prediction models have the potential to improve patient outcomes by assessing risk more accurately than clinicians. Significant additional value lies in these models providing feedback about the factors that amplify an individual patient’s risk. Identification of risk factors enables more informed decisions on interventions to mitigate or ameliorate modifiable factors. For these reasons, risk prediction models must be explainable and grounded on medical knowledge. Current machine learning-based risk prediction models are frequently ‘black-box’ models whose inner workings cannot be understood easily, making it difficult to define risk drivers. Since machine learning models follow patterns in the data rather than looking for medically relevant relationships, possible risk factors identified by these models do not necessarily translate into actionable insights for clinicians. Here, we use the example of risk assessment for postoperative complications to demonstrate how explainable and medically grounded risk prediction models can be developed. Pre- and postoperative risk prediction models are trained based on clinically relevant inputs extracted from electronic medical record data. We show that these models have similar predictive performance as models that incorporate a wider range of inputs and explain the models’ decision-making process by visualizing how different model inputs and their values affect the models’ predictions.
format Online
Article
Text
id pubmed-9200841
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9200841 2022-06-17 Towards interpretable, medically grounded, EMR-based risk prediction models Twick, Isabell Zahavi, Guy Benvenisti, Haggai Rubinstein, Ronya Woods, Michael S. Berkenstadt, Haim Nissan, Aviram Hosgor, Enes Assaf, Dan Sci Rep Article Machine-learning based risk prediction models have the potential to improve patient outcomes by assessing risk more accurately than clinicians. Significant additional value lies in these models providing feedback about the factors that amplify an individual patient’s risk. Identification of risk factors enables more informed decisions on interventions to mitigate or ameliorate modifiable factors. For these reasons, risk prediction models must be explainable and grounded on medical knowledge. Current machine learning-based risk prediction models are frequently ‘black-box’ models whose inner workings cannot be understood easily, making it difficult to define risk drivers. Since machine learning models follow patterns in the data rather than looking for medically relevant relationships, possible risk factors identified by these models do not necessarily translate into actionable insights for clinicians. Here, we use the example of risk assessment for postoperative complications to demonstrate how explainable and medically grounded risk prediction models can be developed. Pre- and postoperative risk prediction models are trained based on clinically relevant inputs extracted from electronic medical record data. We show that these models have similar predictive performance as models that incorporate a wider range of inputs and explain the models’ decision-making process by visualizing how different model inputs and their values affect the models’ predictions. Nature Publishing Group UK 2022-06-15 /pmc/articles/PMC9200841/ /pubmed/35705550 http://dx.doi.org/10.1038/s41598-022-13504-7 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Twick, Isabell
Zahavi, Guy
Benvenisti, Haggai
Rubinstein, Ronya
Woods, Michael S.
Berkenstadt, Haim
Nissan, Aviram
Hosgor, Enes
Assaf, Dan
Towards interpretable, medically grounded, EMR-based risk prediction models
title Towards interpretable, medically grounded, EMR-based risk prediction models
title_full Towards interpretable, medically grounded, EMR-based risk prediction models
title_fullStr Towards interpretable, medically grounded, EMR-based risk prediction models
title_full_unstemmed Towards interpretable, medically grounded, EMR-based risk prediction models
title_short Towards interpretable, medically grounded, EMR-based risk prediction models
title_sort towards interpretable, medically grounded, emr-based risk prediction models
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9200841/
https://www.ncbi.nlm.nih.gov/pubmed/35705550
http://dx.doi.org/10.1038/s41598-022-13504-7
work_keys_str_mv AT twickisabell towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT zahaviguy towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT benvenistihaggai towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT rubinsteinronya towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT woodsmichaels towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT berkenstadthaim towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT nissanaviram towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT hosgorenes towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels
AT assafdan towardsinterpretablemedicallygroundedemrbasedriskpredictionmodels