
A combination of ridge and Liu regressions for extreme learning machine


Bibliographic Details
Main Authors: Yıldırım, Hasan, Özkale, M. Revan
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9774081/
https://www.ncbi.nlm.nih.gov/pubmed/36573103
http://dx.doi.org/10.1007/s00500-022-07745-x
Description
Summary: Extreme learning machine (ELM), a type of feedforward neural network, has been widely used to obtain beneficial insights in various disciplines and real-world applications. Despite advantages such as speed and high adaptability, ELM suffers from instability in the presence of multicollinearity, and additional improvements are needed to overcome this. Regularization is one of the best choices for addressing these drawbacks. Although ridge and Liu regressions have been applied to the ELM algorithm and appear to be effective regularization methods, each has its own characteristic features, such as the form of the tuning parameter, the level of shrinkage, or the norm of the coefficients. Instead of focusing on one of these regularization methods, we propose a combination of ridge and Liu regressions in a unified form in the ELM context as a remedy for the aforementioned drawbacks. To investigate the performance of the proposed algorithm, comprehensive comparisons were carried out on various real-world data sets. The results show that the proposed algorithm is more effective, in terms of generalization capability, than ELM and its ridge- and Liu-based variants, RR-ELM and Liu-ELM. The generalization advantage of the proposed algorithm over RR-ELM and Liu-ELM is remarkable, and it grows as the number of hidden nodes increases. The proposed algorithm also outperforms ELM on all data sets and all node numbers in that it yields a smaller coefficient norm and a smaller standard deviation of that norm. Additionally, the proposed algorithm can be applied to both regression and classification problems.
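To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of an ELM whose output weights are estimated with a combined ridge-Liu form. It assumes a two-parameter estimator of the shape beta(k, d) = (H'H + kI)^{-1}(H'y + k d beta_LS), which reduces to ridge regression when d = 0 and to the Liu estimator when k = 1; the function names, the sigmoid activation, and the toy data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    # Random-feature hidden layer with a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def combined_ridge_liu(H, y, k, d):
    # Hypothetical combined (two-parameter) estimator:
    #   beta(k, d) = (H'H + k I)^{-1} (H'y + k d beta_ls)
    # d = 0 recovers ridge regression; k = 1 recovers the Liu estimator.
    HtH = H.T @ H
    Hty = H.T @ y
    beta_ls = np.linalg.pinv(H) @ y            # least-squares ELM solution
    A = HtH + k * np.eye(H.shape[1])
    return np.linalg.solve(A, Hty + k * d * beta_ls)

# Toy regression data (assumed for illustration).
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

L = 20                                         # number of hidden nodes
W = rng.normal(size=(5, L))                    # random input weights (fixed)
b = rng.normal(size=L)                         # random biases (fixed)
H = elm_hidden(X, W, b)

beta = combined_ridge_liu(H, y, k=0.5, d=0.3)  # k, d chosen arbitrarily here
y_hat = H @ beta
```

In practice the two tuning parameters k and d would be selected by cross-validation or an analytic rule, which is where the unified form gains flexibility over using ridge (k only) or Liu (d only) alone.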