Evaluation of variable selection methods for random forests and omics data sets
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings.
Main Authors: | Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press 2017 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6433899/ https://www.ncbi.nlm.nih.gov/pubmed/29045534 http://dx.doi.org/10.1093/bib/bbx124 |
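The abstract contrasts two goals: selecting a minimal variable set for prediction (e.g. RFE) and selecting all relevant variables (e.g. Boruta, Vita, permutation-based approaches). The paper evaluates R implementations of these methods; purely as an illustration of the two underlying ideas, the following Python/scikit-learn sketch applies a simple permutation-importance threshold and recursive feature elimination to simulated data. All data, parameters and thresholds are assumptions chosen for demonstration and are not the authors' settings.

```python
# Illustrative sketch only -- not the methods evaluated in the paper.
# Simulated "omics-like" data: many predictors, few related to the outcome.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=100, n_features=200,
                           n_informative=10, random_state=1)

# Random forest and permutation importance: the accuracy drop observed when
# a predictor is randomly shuffled, averaged over repeats.
rf = RandomForestClassifier(n_estimators=200, random_state=1)
rf.fit(X, y)
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=1)

# "All relevant"-style selection (crude): keep predictors whose mean
# importance clearly exceeds zero (the threshold is an arbitrary choice here).
all_relevant = [j for j in range(X.shape[1])
                if perm.importances_mean[j] - 2 * perm.importances_std[j] > 0]
print("permutation-selected predictors:", all_relevant)

# "Minimal set"-style selection: recursive feature elimination (RFE)
# repeatedly drops the least important predictors until 10 remain.
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=1),
          n_features_to_select=10, step=0.1)
rfe.fit(X, y)
minimal_set = [j for j in range(X.shape[1]) if rfe.support_[j]]
print("RFE-selected predictors:", minimal_set)
```

Neither block reproduces Boruta's shadow-variable scheme or Vita's importance testing procedure; both merely illustrate the general importance-based selection workflow that the paper compares.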
_version_ | 1783406367595298816 |
---|---|
author | Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke |
author_facet | Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke |
author_sort | Degenhardt, Frauke |
collection | PubMed |
description | Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. |
format | Online Article Text |
id | pubmed-6433899 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-6433899 2019-04-01 Evaluation of variable selection methods for random forests and omics data sets Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke Brief Bioinform Paper Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. Oxford University Press 2017-10-16 /pmc/articles/PMC6433899/ /pubmed/29045534 http://dx.doi.org/10.1093/bib/bbx124 Text en © The Author 2017. Published by Oxford University Press. http://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Paper Degenhardt, Frauke Seifert, Stephan Szymczak, Silke Evaluation of variable selection methods for random forests and omics data sets |
title | Evaluation of variable selection methods for random forests and omics data sets |
title_full | Evaluation of variable selection methods for random forests and omics data sets |
title_fullStr | Evaluation of variable selection methods for random forests and omics data sets |
title_full_unstemmed | Evaluation of variable selection methods for random forests and omics data sets |
title_short | Evaluation of variable selection methods for random forests and omics data sets |
title_sort | evaluation of variable selection methods for random forests and omics data sets |
topic | Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6433899/ https://www.ncbi.nlm.nih.gov/pubmed/29045534 http://dx.doi.org/10.1093/bib/bbx124 |
work_keys_str_mv | AT degenhardtfrauke evaluationofvariableselectionmethodsforrandomforestsandomicsdatasets AT seifertstephan evaluationofvariableselectionmethodsforrandomforestsandomicsdatasets AT szymczaksilke evaluationofvariableselectionmethodsforrandomforestsandomicsdatasets |