Statistical models versus machine learning for competing risks: development and validation of prognostic models
BACKGROUND: In health research, several chronic diseases are susceptible to competing risks (CRs). Initially, statistical models (SM) were developed to estimate the cumulative incidence of an event in the presence of CRs. As there has recently been growing interest in applying machine learning (ML) for...
Main Authors: Kantidakis, Georgios; Putter, Hein; Litière, Saskia; Fiocco, Marta
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951458/ https://www.ncbi.nlm.nih.gov/pubmed/36829145 http://dx.doi.org/10.1186/s12874-023-01866-z
_version_ | 1784893393102438400 |
author | Kantidakis, Georgios Putter, Hein Litière, Saskia Fiocco, Marta |
author_facet | Kantidakis, Georgios Putter, Hein Litière, Saskia Fiocco, Marta |
author_sort | Kantidakis, Georgios |
collection | PubMed |
description | BACKGROUND: In health research, several chronic diseases are susceptible to competing risks (CRs). Initially, statistical models (SM) were developed to estimate the cumulative incidence of an event in the presence of CRs. As there has recently been growing interest in applying machine learning (ML) to clinical prediction, these techniques have also been extended to model CRs, but the literature is limited. Here, our aim is to investigate the potential role of ML versus SM for CRs within non-complex data (small/medium sample size, low-dimensional setting). METHODS: A dataset of 3826 retrospectively collected patients with extremity soft-tissue sarcoma (eSTS) and nine predictors is used to evaluate the models' predictive performance in terms of discrimination and calibration. Two SM (cause-specific Cox, Fine-Gray) and three ML techniques are compared for CRs in a simple clinical setting. The ML models include an original partial logistic artificial neural network for CRs (PLANNCR original), a PLANNCR with novel architectural specifications (PLANNCR extended), and a random survival forest for CRs (RSFCR). The clinical endpoint is the time in years between surgery and disease progression (event of interest) or death (competing event). Time points of interest are 2, 5, and 10 years. RESULTS: Based on the original eSTS data, 100 bootstrapped training datasets are drawn. Performance of the final models is assessed on validation data (left-out samples) using the Brier score and the Area Under the Curve (AUC) with CRs as measures. Miscalibration (absolute accuracy error) is also estimated. Results show that the ML models reach performance comparable to the SM at 2, 5, and 10 years for both the Brier score and the AUC (95% confidence intervals overlapped). However, the SM are frequently better calibrated. CONCLUSIONS: Overall, the ML techniques are less practical, as they require substantial implementation time (data preprocessing, hyperparameter tuning, computational intensity), whereas the regression methods can perform well without the additional workload of model training. As such, for non-complex real-life survival data, these techniques should only be applied as a complement to SM, as exploratory tools for assessing model performance. More attention to model calibration is urgently needed. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-023-01866-z. |
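The abstract centres on estimating the cumulative incidence of an event in the presence of competing risks and on evaluating models at fixed time points (2, 5, and 10 years). For background only, the sketch below implements the standard nonparametric Aalen-Johansen estimator of the cumulative incidence function in plain numpy; the function name, toy data, and event coding (1 = progression, 2 = death, 0 = censored) are illustrative assumptions and not code from the study, which compares cause-specific Cox, Fine-Gray, PLANNCR, and RSFCR models.

```python
import numpy as np

def cumulative_incidence(time, cause, eval_times, cause_of_interest=1):
    """Aalen-Johansen estimate of the cumulative incidence function (CIF)
    for one cause in the presence of competing risks.

    time        : observed times (event or censoring)
    cause       : 0 = censored, 1, 2, ... = cause of the event
    eval_times  : time points at which the CIF is evaluated (returned in sorted order)
    """
    time = np.asarray(time, dtype=float)
    cause = np.asarray(cause)
    eval_times = np.sort(np.asarray(eval_times, dtype=float))

    event_times = np.unique(time[cause > 0])  # distinct times with an event of any cause

    surv = 1.0   # all-cause Kaplan-Meier survival just before the current event time, S(t-)
    cif = 0.0    # running cumulative incidence for the cause of interest
    out = []
    idx = 0
    for t in eval_times:
        while idx < len(event_times) and event_times[idx] <= t:
            tj = event_times[idx]
            at_risk = np.sum(time >= tj)                               # n_j
            d_any = np.sum((time == tj) & (cause > 0))                 # events of any cause at t_j
            d_k = np.sum((time == tj) & (cause == cause_of_interest))  # events of the cause of interest
            cif += surv * d_k / at_risk    # CIF increment: S(t_j-) * d_kj / n_j
            surv *= 1.0 - d_any / at_risk  # update all-cause survival
            idx += 1
        out.append(cif)
    return np.array(out)

# Toy example (hypothetical data): 1 = disease progression, 2 = death, 0 = censored.
t = [1.2, 2.5, 3.1, 4.0, 5.5, 6.0, 7.2, 8.0]
c = [1,   0,   2,   1,   0,   2,   1,   0]
print(cumulative_incidence(t, c, eval_times=[2, 5, 10]))  # CIF of progression at 2, 5, 10 years
```

Note the key design point of the competing-risks formulation: the increment at each event time multiplies the cause-specific hazard contribution by the all-cause survival just before that time, rather than treating competing events as censored observations.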
format | Online Article Text |
id | pubmed-9951458 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-99514582023-02-25 Statistical models versus machine learning for competing risks: development and validation of prognostic models Kantidakis, Georgios; Putter, Hein; Litière, Saskia; Fiocco, Marta. BMC Med Res Methodol, Research. BioMed Central 2023-02-24 /pmc/articles/PMC9951458/ /pubmed/36829145 http://dx.doi.org/10.1186/s12874-023-01866-z Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given. |
spellingShingle | Research Kantidakis, Georgios Putter, Hein Litière, Saskia Fiocco, Marta Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title | Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title_full | Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title_fullStr | Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title_full_unstemmed | Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title_short | Statistical models versus machine learning for competing risks: development and validation of prognostic models |
title_sort | statistical models versus machine learning for competing risks: development and validation of prognostic models |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951458/ https://www.ncbi.nlm.nih.gov/pubmed/36829145 http://dx.doi.org/10.1186/s12874-023-01866-z |
work_keys_str_mv | AT kantidakisgeorgios statisticalmodelsversusmachinelearningforcompetingrisksdevelopmentandvalidationofprognosticmodels AT putterhein statisticalmodelsversusmachinelearningforcompetingrisksdevelopmentandvalidationofprognosticmodels AT litieresaskia statisticalmodelsversusmachinelearningforcompetingrisksdevelopmentandvalidationofprognosticmodels AT fioccomarta statisticalmodelsversusmachinelearningforcompetingrisksdevelopmentandvalidationofprognosticmodels |