
Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction


Bibliographic Details
Main Authors: Staerk, Christian, Mayr, Andreas
Format: Online Article Text
Language: English
Published: BioMed Central 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8447543/
https://www.ncbi.nlm.nih.gov/pubmed/34530737
http://dx.doi.org/10.1186/s12859-021-04340-z
author Staerk, Christian
Mayr, Andreas
collection PubMed
description BACKGROUND: Statistical boosting is a computational approach to select and estimate interpretable prediction models for high-dimensional biomedical data, leading to implicit regularization and variable selection when combined with early stopping. Traditionally, the set of base-learners is fixed for all iterations and consists of simple regression learners that include only one predictor variable at a time. Furthermore, the number of iterations is typically tuned by optimizing predictive performance, leading to models that often include unnecessarily large numbers of noise variables.
RESULTS: We propose three consecutive extensions of classical component-wise gradient boosting. In the first extension, called Subspace Boosting (SubBoost), base-learners can consist of several variables, allowing for multivariable updates in a single iteration. To compensate for the greater flexibility, the ultimate selection of base-learners is based on information criteria, leading to automatic stopping of the algorithm. As the second extension, Random Subspace Boosting (RSubBoost) additionally includes a random preselection of base-learners in each iteration, enabling scalability to high-dimensional data. In a third extension, called Adaptive Subspace Boosting (AdaSubBoost), an adaptive random preselection of base-learners is considered, focusing on base-learners that have proven to be predictive in previous iterations. Simulation results show that the multivariable updates in the three subspace algorithms are particularly beneficial in cases of high correlations among signal covariates. In several biomedical applications the proposed algorithms tend to yield sparser models than classical statistical boosting, while showing very competitive predictive performance, also compared with penalized regression approaches such as the (relaxed) lasso and the elastic net.
CONCLUSIONS: The proposed randomized boosting approaches with multivariable base-learners are promising extensions of statistical boosting, particularly suited for highly correlated and sparse high-dimensional settings. The incorporated selection of base-learners via information criteria induces automatic stopping of the algorithms, promoting sparser and more interpretable prediction models.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-021-04340-z.
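
The abstract's core loop — random preselection of a multivariable subspace of variables, a joint least-squares base-learner fitted to the current residuals, information-criterion-based stopping, and adaptive sampling weights — can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation: the function name, the BIC-style criterion, the model-size count, and the weight-doubling rule are all assumptions made for the sketch.

```python
import numpy as np

def rsubboost_sketch(X, y, subspace_size=5, nu=0.3, max_iter=50, seed=0):
    """Illustrative sketch (hypothetical) of randomized subspace boosting:
    gradient boosting with multivariable least-squares base-learners,
    an adaptively weighted random preselection of candidate variables
    per iteration, and automatic stopping via a BIC-style criterion."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()
    resid = y - intercept
    best_ic = n * np.log(resid @ resid / n)   # criterion of intercept-only model
    probs = np.full(p, 1.0 / p)               # adaptive preselection weights
    for _ in range(max_iter):
        # randomly preselect a small subspace of candidate variables
        cand = rng.choice(p, size=min(subspace_size, p), replace=False, p=probs)
        Xs = X[:, cand]
        # multivariable base-learner: joint least-squares fit to current residuals
        beta, *_ = np.linalg.lstsq(Xs, resid, rcond=None)
        new_resid = resid - nu * (Xs @ beta)
        k = np.count_nonzero(coef) + len(cand)  # crude size of the updated model
        ic = n * np.log(new_resid @ new_resid / n) + np.log(n) * k
        if ic >= best_ic:
            break                               # criterion stops improving: stop
        best_ic = ic
        coef[cand] += nu * beta                 # shrunken multivariable update
        resid = new_resid
        probs[cand] *= 2.0                      # adapt: favor selected variables
        probs /= probs.sum()
    return intercept, coef
```

Because updates are only accepted when the criterion improves, the algorithm stops on its own rather than requiring the number of iterations to be tuned, which mirrors the sparsity-promoting behavior described in the abstract.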
format Online
Article
Text
id pubmed-8447543
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-8447543 2021-09-17 Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction Staerk, Christian Mayr, Andreas BMC Bioinformatics Methodology Article BioMed Central 2021-09-16 /pmc/articles/PMC8447543/ /pubmed/34530737 http://dx.doi.org/10.1186/s12859-021-04340-z Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction
topic Methodology Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8447543/
https://www.ncbi.nlm.nih.gov/pubmed/34530737
http://dx.doi.org/10.1186/s12859-021-04340-z