
A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization

Bibliographic Details
Main Authors: Zhong, Xuxu, Duan, Meijun, Zhang, Xiao, Cheng, Peng
Format: Online Article Text
Language: English
Published: Public Library of Science 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8087089/
https://www.ncbi.nlm.nih.gov/pubmed/33930074
http://dx.doi.org/10.1371/journal.pone.0250951
_version_ 1783686613461630976
author Zhong, Xuxu
Duan, Meijun
Zhang, Xiao
Cheng, Peng
author_facet Zhong, Xuxu
Duan, Meijun
Zhang, Xiao
Cheng, Peng
author_sort Zhong, Xuxu
collection PubMed
description Differential evolution (DE) is favored by scholars for its simplicity and efficiency, but its ability to balance exploration and exploitation needs to be enhanced. In this paper, a hybrid differential evolution with the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO) is proposed, abbreviated as DEGH. Its main contributions are as follows. First, a hybrid mutation operator is constructed in DEGH, in which the two-phase strategy of GSK, the classical mutation operator “rand/1” of DE and the soft besiege rule of HHO are used and improved, forming a double-insurance mechanism for the balance between exploration and exploitation. Second, a novel crossover probability self-adaptation strategy is proposed to strengthen the internal relation among mutation, crossover and selection in DE. On this basis, the crossover probability and scaling factor jointly affect the evolution of each individual, thus enabling the proposed algorithm to better adapt to various optimization problems. In addition, DEGH is compared with eight state-of-the-art DE algorithms on 32 benchmark functions. Experimental results show that the proposed DEGH algorithm is significantly superior to the compared algorithms.
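
As a point of reference, the following is a minimal sketch of the classical building blocks named in the description: DE "rand/1" mutation, binomial crossover, greedy selection, and the standard HHO soft-besiege rule. It is not the authors' DEGH implementation; the function names, the parameter values (F, CR, population size, iteration count) and the sphere test objective are illustrative assumptions.

import numpy as np

def rand1_mutation(pop, i, F=0.5):
    # DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i.
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = np.random.choice(idx, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def binomial_crossover(x, v, CR=0.9):
    # Mix target x and mutant v gene-wise; at least one gene is always taken from v.
    d = len(x)
    mask = np.random.rand(d) < CR
    mask[np.random.randint(d)] = True
    return np.where(mask, v, x)

def soft_besiege(x, x_best, E):
    # Standard HHO soft-besiege update (used when |E| >= 0.5 in HHO):
    # x' = (x_best - x) - E * |J * x_best - x|, with jump strength J = 2 * (1 - r).
    J = 2.0 * (1.0 - np.random.rand())
    return (x_best - x) - E * np.abs(J * x_best - x)

def de_rand1_bin(fobj, bounds, pop_size=30, iters=200, F=0.5, CR=0.9):
    # Plain DE/rand/1/bin loop, shown only to illustrate how mutation, crossover
    # and greedy selection interact; parameter settings are assumptions.
    lo, hi = bounds
    dim = len(lo)
    pop = lo + np.random.rand(pop_size, dim) * (hi - lo)
    fit = np.array([fobj(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            v = np.clip(rand1_mutation(pop, i, F), lo, hi)
            u = binomial_crossover(pop[i], v, CR)
            fu = fobj(u)
            if fu <= fit[i]:  # greedy selection: keep the trial only if it is no worse
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))  # illustrative test function, not from the paper
    lo, hi = -5.0 * np.ones(10), 5.0 * np.ones(10)
    x_best, f_best = de_rand1_bin(sphere, (lo, hi))
    print("best fitness:", f_best)

Per the description, DEGH replaces the plain rand/1 step above with a hybrid mutation operator built from the GSK two-phase strategy, rand/1 and an improved soft-besiege rule, and self-adapts CR rather than fixing it.
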
format Online
Article
Text
id pubmed-8087089
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-80870892021-05-06 A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization Zhong, Xuxu Duan, Meijun Zhang, Xiao Cheng, Peng PLoS One Research Article Differential evolution (DE) is favored by scholars for its simplicity and efficiency, but its ability to balance exploration and exploitation needs to be enhanced. In this paper, a hybrid differential evolution with the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO) is proposed, abbreviated as DEGH. Its main contributions are as follows. First, a hybrid mutation operator is constructed in DEGH, in which the two-phase strategy of GSK, the classical mutation operator “rand/1” of DE and the soft besiege rule of HHO are used and improved, forming a double-insurance mechanism for the balance between exploration and exploitation. Second, a novel crossover probability self-adaptation strategy is proposed to strengthen the internal relation among mutation, crossover and selection in DE. On this basis, the crossover probability and scaling factor jointly affect the evolution of each individual, thus enabling the proposed algorithm to better adapt to various optimization problems. In addition, DEGH is compared with eight state-of-the-art DE algorithms on 32 benchmark functions. Experimental results show that the proposed DEGH algorithm is significantly superior to the compared algorithms. Public Library of Science 2021-04-30 /pmc/articles/PMC8087089/ /pubmed/33930074 http://dx.doi.org/10.1371/journal.pone.0250951 Text en © 2021 Zhong et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Zhong, Xuxu
Duan, Meijun
Zhang, Xiao
Cheng, Peng
A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title_full A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title_fullStr A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title_full_unstemmed A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title_short A hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
title_sort hybrid differential evolution based on gaining‑sharing knowledge algorithm and harris hawks optimization
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8087089/
https://www.ncbi.nlm.nih.gov/pubmed/33930074
http://dx.doi.org/10.1371/journal.pone.0250951
work_keys_str_mv AT zhongxuxu ahybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT duanmeijun ahybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT zhangxiao ahybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT chengpeng ahybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT zhongxuxu hybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT duanmeijun hybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT zhangxiao hybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization
AT chengpeng hybriddifferentialevolutionbasedongainingsharingknowledgealgorithmandharrishawksoptimization