Training a Feedforward Neural Network Using Hybrid Gravitational Search Algorithm with Dynamic Multiswarm Particle Swarm Optimization

Bibliographic Details
Main Authors: Nagra, Arfan Ali; Alyas, Tahir; Hamid, Muhammad; Tabassum, Nadia; Ahmad, Aqeel
Format: Online Article (Text)
Language: English
Published: Hindawi, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9192231/
https://www.ncbi.nlm.nih.gov/pubmed/35707376
http://dx.doi.org/10.1155/2022/2636515
Description
Summary: One of the most well-known methods for solving real-world and complex optimization problems is the gravitational search algorithm (GSA). However, GSA suffers from a sluggish convergence rate and weak local search capability when solving complicated optimization problems. To tackle this problem, a hybrid population-based strategy is designed that combines dynamic multiswarm particle swarm optimization with the gravitational search algorithm (GSADMSPSO). In this manuscript, GSADMSPSO is used as a novel training technique for Feedforward Neural Networks (FNNs), in order to test the algorithm's efficiency in reducing both the local minima trapping and the poor convergence rate of existing evolutionary learning methods. GSADMSPSO distributes the primary population of masses into smaller subswarms and stabilizes them with a new neighborhood scheme; each agent (particle) then updates its position and velocity using the algorithm's global search capability. The fundamental concept is to combine GSA's exploration ability with DMSPSO's exploitation ability, improving both phases of the search. The proposed algorithm's performance is compared with that of GSA and its variants on a range of well-known benchmark test functions. The experimental results suggest that, when training FNNs, the proposed method outperforms the other variants in convergence speed and in avoiding local minima.
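To make the described scheme concrete, below is a minimal Python sketch of a GSADMSPSO-style loop: the population is split into subswarms, each agent's acceleration comes from a GSA-style gravitational pull toward fitter (heavier) agents, the velocity update adds DMSPSO-style attraction to personal and subswarm bests, and the subswarms are periodically regrouped. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function names (gsadmspso, fnn_mse), all coefficients (gravity schedule, inertia 0.7, acceleration weights 1.5, regrouping period), the 1-8-1 network, and the toy regression task are assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch of a hybrid GSA + dynamic-multiswarm-PSO update (hypothetical
# parameter values; see the lead-in above for what is assumed).
def gsadmspso(fitness, dim, n_agents=30, n_subswarms=5, iters=200,
              regroup_every=5, bounds=(-1.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_agents, dim))
    vel = np.zeros((n_agents, dim))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    groups = np.arange(n_agents) % n_subswarms      # initial subswarm labels
    for t in range(iters):
        fit = np.array([fitness(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        # GSA part: normalized masses from fitness, a decaying gravitational
        # constant, and an acceleration pulling each agent toward the others.
        worst, best = fit.max(), fit.min()
        mass = (worst - fit) / (worst - best + 1e-12)
        mass = mass / (mass.sum() + 1e-12)
        G = 100.0 * np.exp(-20.0 * t / iters)
        acc = np.zeros_like(pos)
        for i in range(n_agents):
            diff = pos - pos[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[i] = (G * rng.random(n_agents) * mass / dist) @ diff
        # DMSPSO part: each agent is also attracted to its personal best and
        # to the best position found inside its own subswarm (lbest).
        for g in range(n_subswarms):
            idx = np.where(groups == g)[0]
            lbest = pbest[idx[np.argmin(pbest_fit[idx])]]
            r1, r2 = rng.random((2, len(idx), dim))
            vel[idx] = (0.7 * vel[idx] + acc[idx]
                        + 1.5 * r1 * (pbest[idx] - pos[idx])
                        + 1.5 * r2 * (lbest - pos[idx]))
        pos = np.clip(pos + vel, lo, hi)
        if (t + 1) % regroup_every == 0:            # dynamic regrouping
            groups = rng.permutation(n_agents) % n_subswarms
    return pbest[np.argmin(pbest_fit)], pbest_fit.min()

# Hypothetical FNN objective: the agent's position vector holds the flattened
# weights of a one-hidden-layer network; fitness is the training MSE.
def fnn_mse(position, X, y, n_in, n_hidden):
    k = n_in * n_hidden
    W1 = position[:k].reshape(n_in, n_hidden)
    b1 = position[k:k + n_hidden]
    W2 = position[k + n_hidden:k + 2 * n_hidden]
    b2 = position[-1]
    out = np.tanh(X @ W1 + b1) @ W2 + b2            # forward pass
    return np.mean((out - y) ** 2)

# Usage: train an assumed 1-8-1 network on noisy sin(x) samples.
X = np.linspace(-3.0, 3.0, 64).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).normal(size=64)
dim = 1 * 8 + 8 + 8 + 1                             # total weight count
best_w, best_err = gsadmspso(lambda p: fnn_mse(p, X, y, 1, 8), dim)
print(f"best training MSE: {best_err:.4f}")
```

Encoding the FNN weights as the agent's position vector is what lets a gradient-free optimizer of this kind train the network: fitness evaluation needs only forward passes, which is how such methods aim to sidestep the local minima that gradient-based training can get trapped in.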