Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction
Main authors: | Rahmani, Keyvan; Thapa, Rahul; Tsou, Peiling; Chetty, Satish Casie; Barnes, Gina; Lam, Carson; Tso, Chak Foon |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cold Spring Harbor Laboratory, 2022 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9196120/ https://www.ncbi.nlm.nih.gov/pubmed/35702157 http://dx.doi.org/10.1101/2022.06.06.22276062 |
_version_ | 1784727112827011072 |
author | Rahmani, Keyvan; Thapa, Rahul; Tsou, Peiling; Chetty, Satish Casie; Barnes, Gina; Lam, Carson; Tso, Chak Foon
author_facet | Rahmani, Keyvan; Thapa, Rahul; Tsou, Peiling; Chetty, Satish Casie; Barnes, Gina; Lam, Carson; Tso, Chak Foon
author_sort | Rahmani, Keyvan |
collection | PubMed |
description | BACKGROUND: Data drift can negatively impact the performance of machine learning algorithms (MLAs) that were trained on historical data. As such, MLAs should be continuously monitored and tuned to overcome the systematic changes that occur in the distribution of data. In this paper, we study the extent of data drift and provide insights about its characteristics for sepsis onset prediction. This study will help elucidate the nature of data drift for prediction of sepsis and similar diseases. This may aid with the development of more effective patient monitoring systems that can stratify risk for dynamic disease states in hospitals.
METHODS: We devise a series of simulations that measure the effects of data drift in patients with sepsis. We simulate multiple scenarios in which data drift may occur, namely the change in the distribution of the predictor variables (covariate shift), the change in the statistical relationship between the predictors and the target (concept shift), and the occurrence of a major healthcare event (major event) such as the COVID-19 pandemic. We measure the impact of data drift on model performances, identify the circumstances that necessitate model retraining, and compare the effects of different retraining methodologies and model architecture on the outcomes. We present the results for two different MLAs, eXtreme Gradient Boosting (XGB) and Recurrent Neural Network (RNN).
RESULTS: Our results show that the properly retrained XGB models outperform the baseline models in all simulation scenarios, hence signifying the existence of data drift. In the major event scenario, the area under the receiver operating characteristic curve (AUROC) at the end of the simulation period is 0.811 for the baseline XGB model and 0.868 for the retrained XGB model. In the covariate shift scenario, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.853 and 0.874 respectively. In the concept shift scenario and under the mixed labeling method, the retrained XGB models perform worse than the baseline model for most simulation steps. However, under the full relabeling method, the AUROC at the end of the simulation period for the baseline and retrained XGB models is 0.852 and 0.877 respectively. The results for the RNN models were mixed, suggesting that retraining based on a fixed network architecture may be inadequate for an RNN. We also present the results in the form of other performance metrics such as the ratio of observed to expected probabilities (calibration) and the normalized rate of positive predictive values (PPV) by prevalence, referred to as lift, at a sensitivity of 0.8.
CONCLUSION: Our simulations reveal that retraining periods of a couple of months or using several thousand patients are likely to be adequate to monitor machine learning models that predict sepsis. This indicates that a machine learning system for sepsis prediction will probably need less infrastructure for performance monitoring and retraining compared to other applications in which data drift is more frequent and continuous. Our results also show that in the event of a concept shift, a full overhaul of the sepsis prediction model may be necessary because it indicates a discrete change in the definition of sepsis labels, and mixing the labels for the sake of incremental training may not produce the desired results. |
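The performance metrics named in the abstract (AUROC, calibration as an observed-to-expected ratio, and lift, i.e. PPV normalized by prevalence at a sensitivity of 0.8) can be sketched in a small self-contained example. This is an illustrative reimplementation of the standard metric definitions, not the paper's actual pipeline; the synthetic labels and scores below are hypothetical.

```python
# Hedged sketch of the abstract's evaluation metrics on synthetic data.

def auroc(labels, scores):
    """AUROC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_ratio(labels, scores):
    """Observed positives divided by the sum of predicted probabilities
    (the expected count); 1.0 means well calibrated on average."""
    return sum(labels) / sum(scores)

def lift_at_sensitivity(labels, scores, sensitivity=0.8):
    """PPV divided by prevalence at the lowest score threshold whose
    recall reaches the target sensitivity."""
    pos_scores = sorted((s for y, s in zip(labels, scores) if y == 1),
                        reverse=True)
    k = max(1, round(sensitivity * len(pos_scores)))
    thresh = pos_scores[k - 1]                 # lowest flagged positive
    flagged = [y for y, s in zip(labels, scores) if s >= thresh]
    ppv = sum(flagged) / len(flagged)
    prevalence = sum(labels) / len(labels)
    return ppv / prevalence

# Tiny synthetic cohort: 2 septic, 3 non-septic patients (hypothetical).
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.3, 0.2]
print(auroc(labels, scores))                   # 5 of 6 pairs correct ~ 0.833
print(lift_at_sensitivity(labels, scores, 0.8))
```

A lift above 1.0 means the model concentrates true positives above the operating threshold better than random flagging at the same alert volume, which is why the paper reports it alongside AUROC for a fixed sensitivity.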
format | Online Article Text |
id | pubmed-9196120 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Cold Spring Harbor Laboratory |
record_format | MEDLINE/PubMed |
spelling | pubmed-9196120 2022-06-15 Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction Rahmani, Keyvan; Thapa, Rahul; Tsou, Peiling; Chetty, Satish Casie; Barnes, Gina; Lam, Carson; Tso, Chak Foon medRxiv Article
Cold Spring Harbor Laboratory 2022-06-07 /pmc/articles/PMC9196120/ /pubmed/35702157 http://dx.doi.org/10.1101/2022.06.06.22276062 Text en This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which allows reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator. |
spellingShingle | Article Rahmani, Keyvan Thapa, Rahul Tsou, Peiling Chetty, Satish Casie Barnes, Gina Lam, Carson Tso, Chak Foon Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title | Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title_full | Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title_fullStr | Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title_full_unstemmed | Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title_short | Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
title_sort | assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9196120/ https://www.ncbi.nlm.nih.gov/pubmed/35702157 http://dx.doi.org/10.1101/2022.06.06.22276062 |
work_keys_str_mv | AT rahmanikeyvan assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT thaparahul assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT tsoupeiling assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT chettysatishcasie assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT barnesgina assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT lamcarson assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction AT tsochakfoon assessingtheeffectsofdatadriftontheperformanceofmachinelearningmodelsusedinclinicalsepsisprediction |