
Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment

Bibliographic Details
Main Authors: Chen, Weimin, Wong, Kelvin Kian Loong, Long, Sifan, Sun, Zhili
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9031020/
https://www.ncbi.nlm.nih.gov/pubmed/35455103
http://dx.doi.org/10.3390/e24040440
author Chen, Weimin
Wong, Kelvin Kian Loong
Long, Sifan
Sun, Zhili
collection PubMed
description In the field of reinforcement learning, we propose a Correct Proximal Policy Optimization (CPPO) algorithm based on a modified penalty factor β and relative entropy, in order to address the robustness and stationarity problems of traditional algorithms. Firstly, in the process of reinforcement learning, this paper establishes a strategy evaluation mechanism through the policy distribution function. Secondly, the state space function is quantified by introducing entropy, whereby the approximation policy is used to approximate the real policy distribution, and kernel-function estimation and the calculation of relative entropy are used to fit the reward function for complex problems. Finally, through comparative analysis on classic test cases, we demonstrate that the proposed algorithm is effective, with a faster convergence speed and better performance than the traditional PPO algorithm, and that the relative-entropy measure can reveal the differences. In addition, it can use the information of a complex environment more efficiently to learn policies. At the same time, our paper not only explains the rationality of the policy distribution theory; the proposed framework can also balance iteration steps, computational complexity, and convergence speed, and we introduce an effective measure of performance based on the relative entropy concept.
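For context, the sketch below illustrates the penalty-based PPO objective that CPPO builds on: a surrogate loss that weights advantages by the policy probability ratio and subtracts β times the relative entropy (KL divergence) between the old and new policy, with β adapted toward a KL target. This is the standard adaptive-KL PPO formulation, not the authors' specific modified penalty schedule, which the abstract does not detail; the function names and the kl_target value are illustrative assumptions.

```python
# Minimal sketch of a KL-penalty PPO surrogate with an adaptive penalty factor beta.
# This follows the standard adaptive-KL PPO rule; it is NOT the paper's modified
# beta schedule, and kl_target / all names here are illustrative assumptions.
import numpy as np

def kl_divergence(p_old: np.ndarray, p_new: np.ndarray) -> np.ndarray:
    """Relative entropy KL(p_old || p_new) per state, for discrete policies."""
    return np.sum(p_old * (np.log(p_old) - np.log(p_new)), axis=-1)

def kl_penalty_surrogate(p_old, p_new, actions, advantages, beta):
    """Penalized surrogate: mean(ratio * advantage) - beta * mean KL."""
    idx = np.arange(len(actions))
    ratio = p_new[idx, actions] / p_old[idx, actions]  # pi_new(a|s) / pi_old(a|s)
    return np.mean(ratio * advantages) - beta * np.mean(kl_divergence(p_old, p_new))

def adapt_beta(beta, kl, kl_target=0.01):
    """Standard adaptive rule: shrink beta when KL is small, grow it when large."""
    if kl < kl_target / 1.5:
        return beta / 2.0
    if kl > kl_target * 1.5:
        return beta * 2.0
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    p_old = rng.dirichlet(np.ones(n_actions), size=n_states)   # old policy per state
    p_new = rng.dirichlet(np.ones(n_actions), size=n_states)   # updated policy per state
    actions = rng.integers(0, n_actions, size=n_states)        # sampled actions
    advantages = rng.normal(size=n_states)                     # advantage estimates
    beta = 1.0
    surrogate = kl_penalty_surrogate(p_old, p_new, actions, advantages, beta)
    beta = adapt_beta(beta, np.mean(kl_divergence(p_old, p_new)))
    print(f"surrogate={surrogate:.4f}, adapted beta={beta}")
```

In this formulation the relative entropy both regularizes the policy update and measures how far the new policy has moved from the old one, which is the diagnostic role the abstract assigns to it.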
format Online
Article
Text
id pubmed-9031020
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
journal Entropy (Basel)
publishDate_full 2022-03-22
license © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9031020/
https://www.ncbi.nlm.nih.gov/pubmed/35455103
http://dx.doi.org/10.3390/e24040440