
Boosting ridge for the extreme learning machine globally optimised for classification and regression problems

This paper explores the boosting ridge (BR) framework in the extreme learning machine (ELM) community and presents a novel model that trains the base learners as a global ensemble. In ELM single-hidden-layer networks, the nodes in the hidden layer are preconfigured before training, and optimisation is performed only on the output-layer weights. The previous implementation of the BR ensemble with ELMs as base learners (BRELM) fixes the hidden-layer nodes for all the ELMs, and the ensemble learning method generates different output-layer coefficients by sequentially reducing the residual error of the ensemble as more base learners are added. As in other ensemble methodologies, base learners are added until ensemble criteria such as size or performance are met. This paper proposes a global learning method in the BR framework in which base learners are not added step by step; instead, all of them are calculated in a single step aimed at ensemble performance. This method (i) uses a different hidden-layer configuration for each base learner, (ii) optimises the base learners all at once rather than sequentially, thus avoiding saturation, and (iii) does not suffer the disadvantage of working with strong classifiers. Various regression and classification benchmark datasets were selected to compare this method with the original BRELM implementation and other state-of-the-art algorithms: 71 datasets for classification and 52 for regression, evaluated with different metrics and analysed for different characteristics of the datasets, such as size, number of classes and degree of class imbalance. Statistical tests indicate the superiority of the proposed method in both regression and classification problems in all experimental scenarios.
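To make the ELM setup in the abstract concrete, here is a minimal sketch of a single-hidden-layer ELM for regression: the hidden layer is drawn at random and left untrained, and only the output weights are solved by ridge regression. This is not the authors' implementation; the tanh activation, hyperparameters, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, C=1.0):
    # Hidden layer is preconfigured at random and never trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # Only the output weights are optimised, via ridge regression
    # (regularised least squares) with regularisation strength 1/C.
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: recover y = sin(x).
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
model = elm_fit(X, y)
mse = np.mean((elm_predict(model, X) - y) ** 2)
```

Because the hidden layer is fixed, training reduces to one linear solve, which is the computational appeal of the ELM family.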


Bibliographic Details
Main Authors: Peralez-González, Carlos, Pérez-Rodríguez, Javier, Durán-Rosal, Antonio M.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10362034/
https://www.ncbi.nlm.nih.gov/pubmed/37479841
http://dx.doi.org/10.1038/s41598-023-38948-3
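The sequential BRELM scheme that the paper improves on can be sketched from the abstract's description: one hidden layer is drawn and shared by all base learners, and each new learner's output weights are ridge-fitted to the residual left by the ensemble so far. This is a sketch under that description, not the published code; hyperparameters and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def brelm_fit(X, y, n_learners=5, n_hidden=30, C=1.0):
    # BRELM fixes one hidden layer for every base learner.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    A = H.T @ H + np.eye(n_hidden) / C  # shared ridge system
    residual = y.astype(float).copy()
    betas = []
    for _ in range(n_learners):
        # Each learner's output weights are fitted to the residual
        # error of the ensemble built so far.
        beta = np.linalg.solve(A, H.T @ residual)
        residual = residual - H @ beta
        betas.append(beta)
    return W, b, betas

def brelm_predict(model, X):
    W, b, betas = model
    # The ensemble prediction is the sum of the base learners' outputs.
    return np.tanh(X @ W + b) @ sum(betas)

X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, betas = brelm_fit(X, y)
H = np.tanh(X @ W + b)
mse_single = np.mean((H @ betas[0] - y) ** 2)
mse_ensemble = np.mean((H @ sum(betas) - y) ** 2)
```

The proposed global method differs on both points the abstract highlights: each base learner gets its own hidden-layer configuration, and all output weights are solved jointly in a single step rather than residual by residual.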
Journal: Sci Rep
Published online: 2023-07-21
License: © The Author(s) 2023. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).