
Reinforcement learning for solution updating in Artificial Bee Colony


Bibliographic Details
Main Authors: Fairee, Suthida, Prom-On, Santitham, Sirinaovakul, Booncharoen
Format: Online Article Text
Language: English
Published: Public Library of Science 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6049945/
https://www.ncbi.nlm.nih.gov/pubmed/30016357
http://dx.doi.org/10.1371/journal.pone.0200738
_version_ 1783340260964433920
author Fairee, Suthida
Prom-On, Santitham
Sirinaovakul, Booncharoen
author_facet Fairee, Suthida
Prom-On, Santitham
Sirinaovakul, Booncharoen
author_sort Fairee, Suthida
collection PubMed
description In the Artificial Bee Colony (ABC) algorithm, the employed bee and onlooker bee phases update candidate solutions by changing a value in one dimension, dubbed the one-dimension update process. For problems in which the number of dimensions is very high, the one-dimension update process can cause the solution quality and convergence speed to drop. This paper proposes a new algorithm, called R-ABC, which uses reinforcement learning for solution updating in the ABC algorithm. After an employed bee updates a solution, the new solution results in positive or negative reinforcement applied to the solution dimensions in the onlooker bee phase. Positive reinforcement is given when the candidate solution from the employed bee phase provides a better fitness value. The more often a dimension provides a better fitness value when changed, the higher the update value assigned to that dimension becomes in the onlooker bee phase. Conversely, negative reinforcement is given when the candidate solution does not provide a better fitness value. The performance of the proposed algorithm is assessed on eight basic numerical benchmark functions in four categories with 100, 500, 700, and 900 dimensions, seven CEC2005 shifted functions with 100, 500, 700, and 900 dimensions, and six CEC2014 hybrid functions with 100 dimensions. The results show that the proposed algorithm provides solutions that are significantly better than those of all other tested algorithms, for all tested dimensions, on the basic benchmark functions. On the CEC2005 shifted functions, the number of solutions for which R-ABC is significantly better than the other algorithms increases as the number of dimensions increases. On the CEC2014 hybrid functions, R-ABC is at least comparable to state-of-the-art ABC variants.
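The description above outlines the reinforcement mechanism. A minimal runnable sketch of the idea follows; all names, parameter values, and the exact reward/penalty rules here are illustrative assumptions layered on a standard ABC loop, not the paper's precise formulation:

```python
import random

def r_abc_sketch(objective, dim, n_food=10, limit=20, max_iters=200,
                 lo=-5.0, hi=5.0, reward=1.0, penalty=0.5, seed=0):
    """Hypothetical sketch of R-ABC's per-dimension reinforcement.

    Each dimension d carries a weight w[d]. When an employed bee's
    one-dimension change on d improves fitness, w[d] receives positive
    reinforcement; when it does not, negative reinforcement (floored at
    a small minimum). Onlooker bees then sample which dimension to
    change in proportion to w, so frequently-improving dimensions are
    updated more often. Minimisation is assumed.
    """
    rng = random.Random(seed)
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [objective(x) for x in foods]
    trials = [0] * n_food
    w = [1.0] * dim          # per-dimension reinforcement weights
    best_f = min(fits)       # best fitness seen so far

    def mutate(i, d):
        # classic ABC one-dimension move toward/away from a random neighbour k
        k = rng.randrange(n_food - 1)
        if k >= i:
            k += 1
        cand = foods[i][:]
        phi = rng.uniform(-1.0, 1.0)
        cand[d] = min(hi, max(lo, foods[i][d] + phi * (foods[i][d] - foods[k][d])))
        return cand

    def try_update(i, d):
        cand = mutate(i, d)
        f = objective(cand)
        if f < fits[i]:                      # greedy selection: keep improvement
            foods[i], fits[i], trials[i] = cand, f, 0
            return True
        trials[i] += 1
        return False

    for _ in range(max_iters):
        # employed bee phase: random dimension, then reinforce it
        for i in range(n_food):
            d = rng.randrange(dim)
            if try_update(i, d):
                w[d] += reward                       # positive reinforcement
            else:
                w[d] = max(0.1, w[d] - penalty)      # negative reinforcement
        # onlooker bee phase: sources by fitness, dimensions by learned weights
        probs = [1.0 / (1.0 + f) for f in fits]
        for _ in range(n_food):
            i = rng.choices(range(n_food), weights=probs)[0]
            d = rng.choices(range(dim), weights=w)[0]
            try_update(i, d)
        best_f = min(best_f, min(fits))
        # scout bee phase: abandon exhausted sources
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = objective(foods[i])
                trials[i] = 0
    return best_f

# usage: minimise the sphere function in 5 dimensions
best = r_abc_sketch(lambda x: sum(v * v for v in x), dim=5)
```

The weight vector `w` is the only addition to a plain ABC loop; how the paper scales, normalises, or decays these values is not specified in the abstract, so the additive reward/penalty scheme above is a deliberate simplification.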
format Online
Article
Text
id pubmed-6049945
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-60499452018-07-26 Reinforcement learning for solution updating in Artificial Bee Colony Fairee, Suthida Prom-On, Santitham Sirinaovakul, Booncharoen PLoS One Research Article
Public Library of Science 2018-07-17 /pmc/articles/PMC6049945/ /pubmed/30016357 http://dx.doi.org/10.1371/journal.pone.0200738 Text en © 2018 Fairee et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Fairee, Suthida
Prom-On, Santitham
Sirinaovakul, Booncharoen
Reinforcement learning for solution updating in Artificial Bee Colony
title Reinforcement learning for solution updating in Artificial Bee Colony
title_full Reinforcement learning for solution updating in Artificial Bee Colony
title_fullStr Reinforcement learning for solution updating in Artificial Bee Colony
title_full_unstemmed Reinforcement learning for solution updating in Artificial Bee Colony
title_short Reinforcement learning for solution updating in Artificial Bee Colony
title_sort reinforcement learning for solution updating in artificial bee colony
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6049945/
https://www.ncbi.nlm.nih.gov/pubmed/30016357
http://dx.doi.org/10.1371/journal.pone.0200738
work_keys_str_mv AT faireesuthida reinforcementlearningforsolutionupdatinginartificialbeecolony
AT promonsantitham reinforcementlearningforsolutionupdatinginartificialbeecolony
AT sirinaovakulbooncharoen reinforcementlearningforsolutionupdatinginartificialbeecolony