
Robustness of Random Forest-based gene selection methods

BACKGROUND: Gene selection is an important part of microarray data analysis because it provides information that can lead to a better mechanistic understanding of an investigated phenomenon. At the same time, gene selection is very difficult because of the noisy nature of microarray data. As a consequence, gene selection is often performed with machine learning methods. The Random Forest method is particularly well suited for this purpose. In this work, four state-of-the-art Random Forest-based feature selection methods were compared in a gene selection context. The analysis focused on the stability of selection because, although it is necessary for determining the significance of results, it is often ignored in similar studies.

RESULTS: The comparison of the post-selection accuracy of validated Random Forest classifiers revealed that all investigated methods were equivalent in this context. However, the methods substantially differed with respect to the number of selected genes and the stability of selection. Of the analysed methods, the Boruta algorithm predicted the most genes as potentially important.

CONCLUSIONS: The post-selection classifier error rate, which is a frequently used measure, was found to be a potentially deceptive measure of gene selection quality. When the number of consistently selected genes was considered, the Boruta algorithm was clearly the best. Although it was also the most computationally intensive method, the Boruta algorithm's computational demands could be reduced to levels comparable to those of other algorithms by replacing the Random Forest importance with a comparable measure from Random Ferns (a similar but simplified classifier). Despite their design assumptions, the minimal optimal selection methods were found to select a high fraction of false positives.
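The abstract's central method is Boruta, an "all-relevant" feature selection wrapper around Random Forest importance (the reference implementations are the author's Boruta and rFerns R packages). As a hedged illustration of the underlying idea only, the sketch below performs a single shadow-feature comparison in Python with scikit-learn; it is not the iterative, statistically tested Boruta procedure, and every function name, data set, and threshold in it is an illustrative assumption.

```python
# Minimal sketch of the shadow-feature idea behind Boruta-style selection.
# Illustrative only: one importance comparison, not the iterative,
# statistically tested procedure of the actual Boruta algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shadow_feature_screen(X, y, n_estimators=500, random_state=0):
    """Flag features whose importance exceeds the best permuted 'shadow' copy."""
    rng = np.random.default_rng(random_state)
    # Shadow features: column-wise permutations that destroy any real signal.
    X_shadow = np.apply_along_axis(rng.permutation, 0, X)
    X_aug = np.hstack([X, X_shadow])

    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    rf.fit(X_aug, y)

    n = X.shape[1]
    real_imp = rf.feature_importances_[:n]
    shadow_imp = rf.feature_importances_[n:]
    # A feature is tentatively 'important' if it beats the best shadow feature.
    return real_imp > shadow_imp.max()

# Example on random data (no structure, so few or no features should pass):
X = np.random.default_rng(1).normal(size=(60, 200))
y = np.random.default_rng(2).integers(0, 2, size=60)
print(shadow_feature_screen(X, y).sum(), "features flagged")
```

On data with no real signal, few or no columns should beat the best shadow copy, which is the intuition that Boruta's repeated statistical test builds on.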

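A second theme of the abstract is that stability of selection, i.e. how consistently the same genes are chosen across runs, is more informative than raw post-selection accuracy. The sketch below shows one generic way to quantify it with bootstrap resampling, per-gene selection frequency, and mean pairwise Jaccard overlap; it assumes some `select_genes(X, y)` callable, such as the screen sketched above, and is not the resampling protocol used in the paper. A related sketch of why the post-selection error rate itself can mislead follows the raw record fields at the end of this page.

```python
# Sketch: measure selection stability by re-running a selector on bootstrap
# resamples and summarising selection frequency and pairwise set overlap.
# `select_genes` is any function returning a boolean mask over features,
# e.g. the shadow_feature_screen sketch above (an illustrative assumption).
import numpy as np

def selection_stability(X, y, select_genes, n_repeats=30, random_state=0):
    rng = np.random.default_rng(random_state)
    n_samples = X.shape[0]
    masks = []
    for _ in range(n_repeats):
        idx = rng.integers(0, n_samples, size=n_samples)  # bootstrap resample
        masks.append(select_genes(X[idx], y[idx]))
    masks = np.array(masks)

    # How often each gene is selected across repeats.
    frequency = masks.mean(axis=0)

    # Mean pairwise Jaccard index between selected-gene sets.
    jaccards = []
    for i in range(n_repeats):
        for j in range(i + 1, n_repeats):
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            jaccards.append(inter / union if union else 1.0)
    return frequency, float(np.mean(jaccards))

# Genes selected in, say, at least 90% of repeats can be read as the
# "consistently selected" set discussed in the abstract.
```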

Bibliographic Details
Main Author: Kursa, Miron Bartosz
Format: Online Article Text
Language: English
Published: BioMed Central 2014
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3897925/
https://www.ncbi.nlm.nih.gov/pubmed/24410865
http://dx.doi.org/10.1186/1471-2105-15-8
_version_ 1782300323819290624
author Kursa, Miron Bartosz
author_facet Kursa, Miron Bartosz
author_sort Kursa, Miron Bartosz
collection PubMed
description BACKGROUND: Gene selection is an important part of microarray data analysis because it provides information that can lead to a better mechanistic understanding of an investigated phenomenon. At the same time, gene selection is very difficult because of the noisy nature of microarray data. As a consequence, gene selection is often performed with machine learning methods. The Random Forest method is particularly well suited for this purpose. In this work, four state-of-the-art Random Forest-based feature selection methods were compared in a gene selection context. The analysis focused on the stability of selection because, although it is necessary for determining the significance of results, it is often ignored in similar studies. RESULTS: The comparison of the post-selection accuracy of validated Random Forest classifiers revealed that all investigated methods were equivalent in this context. However, the methods substantially differed with respect to the number of selected genes and the stability of selection. Of the analysed methods, the Boruta algorithm predicted the most genes as potentially important. CONCLUSIONS: The post-selection classifier error rate, which is a frequently used measure, was found to be a potentially deceptive measure of gene selection quality. When the number of consistently selected genes was considered, the Boruta algorithm was clearly the best. Although it was also the most computationally intensive method, the Boruta algorithm's computational demands could be reduced to levels comparable to those of other algorithms by replacing the Random Forest importance with a comparable measure from Random Ferns (a similar but simplified classifier). Despite their design assumptions, the minimal optimal selection methods were found to select a high fraction of false positives.
format Online
Article
Text
id pubmed-3897925
institution National Center for Biotechnology Information
language English
publishDate 2014
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-3897925 2014-01-23 Robustness of Random Forest-based gene selection methods Kursa, Miron Bartosz BMC Bioinformatics Research Article (abstract as in the description field above) BioMed Central 2014-01-13 /pmc/articles/PMC3897925/ /pubmed/24410865 http://dx.doi.org/10.1186/1471-2105-15-8 Text en Copyright © 2014 Kursa; licensee BioMed Central Ltd. http://creativecommons.org/licenses/by/2.0 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Kursa, Miron Bartosz
Robustness of Random Forest-based gene selection methods
title Robustness of Random Forest-based gene selection methods
title_full Robustness of Random Forest-based gene selection methods
title_fullStr Robustness of Random Forest-based gene selection methods
title_full_unstemmed Robustness of Random Forest-based gene selection methods
title_short Robustness of Random Forest-based gene selection methods
title_sort robustness of random forest-based gene selection methods
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3897925/
https://www.ncbi.nlm.nih.gov/pubmed/24410865
http://dx.doi.org/10.1186/1471-2105-15-8
work_keys_str_mv AT kursamironbartosz robustnessofrandomforestbasedgeneselectionmethods
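The abstract concludes that the post-selection classifier error rate can be a deceptive measure of gene selection quality. A standard way to see why: if genes are selected on the whole data set before cross-validation, information from the test folds leaks into the selection and inflates accuracy even on pure noise. The hedged sketch below contrasts that naive protocol with selection repeated inside each training fold; the data, the top-k selector, and all parameters are illustrative assumptions, not the paper's setup.

```python
# Sketch: why post-selection error rate can be deceptive. Selecting genes on
# the full data set and then cross-validating on the same data ("naive")
# leaks information; selecting inside each training fold ("nested") does not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))   # pure noise: no real gene signal
y = np.repeat([0, 1], 30)

def top_k_by_importance(X_tr, y_tr, k=50):
    """Pick the k genes with the highest Random Forest importance."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return np.argsort(rf.feature_importances_)[-k:]

def cv_accuracy(X, y, genes_fixed=None):
    """5-fold CV accuracy; reselect genes per fold unless a fixed set is given."""
    accs = []
    for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
        genes = genes_fixed if genes_fixed is not None else top_k_by_importance(X[tr], y[tr])
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X[tr][:, genes], y[tr])
        accs.append(rf.score(X[te][:, genes], y[te]))
    return float(np.mean(accs))

naive = cv_accuracy(X, y, genes_fixed=top_k_by_importance(X, y))  # selection on all data
nested = cv_accuracy(X, y)                                        # selection per fold
print(f"naive: {naive:.2f}  nested: {nested:.2f}  (expect naive well above nested, nested near 0.5)")
```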