
An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme

Traditional path planning is mainly applied in discrete action spaces, which leads to incomplete ship navigation power propulsion strategies during the path search process. Moreover, reinforcement learning suffers from low success rates due to unbalanced sample collection and poorly designed reward functions.

Full description

Bibliographic Details
Main Authors: Xiao, Qianhao; Jiang, Li; Wang, Manman; Zhang, Xin
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10346433/
https://www.ncbi.nlm.nih.gov/pubmed/37447949
http://dx.doi.org/10.3390/s23136101
_version_ 1785073312202752000
author Xiao, Qianhao
Jiang, Li
Wang, Manman
Zhang, Xin
author_facet Xiao, Qianhao
Jiang, Li
Wang, Manman
Zhang, Xin
author_sort Xiao, Qianhao
collection PubMed
description Traditional path planning is mainly applied in discrete action spaces, which leads to incomplete ship navigation power propulsion strategies during the path search process. Moreover, reinforcement learning suffers from low success rates due to unbalanced sample collection and poorly designed reward functions. In this paper, an environment framework is designed that is built on the Box2D physics engine and employs a reward function whose main component is the distance between the agent and the arrival point, supplemented by a potential field superimposed from boundary control, obstacles, and the arrival point. We also employ the state-of-the-art PPO (Proximal Policy Optimization) algorithm as a baseline for global path planning to address the issue of an incomplete ship navigation power propulsion strategy. Additionally, a Beta policy-based distributed sample collection PPO algorithm is proposed to overcome the problem of unbalanced sample collection in path planning by dividing the map into sub-regions for distributed sample collection. The experimental results show the following: (1) the distributed sample collection training policy exhibits stronger robustness in the PPO algorithm; (2) the introduced Beta policy for action sampling achieves a higher path planning success rate and greater reward accumulation than the Gaussian policy within the same training time; and (3) when planning a path of the same length, the proposed Beta policy-based distributed sample collection PPO algorithm generates a smoother path than traditional path planning algorithms such as A*, IDA*, and Dijkstra.
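
As a rough illustration of the Beta-policy action sampling described above (a minimal sketch, not the authors' implementation): for a bounded continuous action space, the policy network can output per-dimension Beta concentration parameters, sample an action in (0, 1), and rescale it to the environment's action bounds before it enters PPO's clipped-surrogate update. The PyTorch framework, network sizes, and all names below are illustrative assumptions.

import torch
import torch.nn as nn
from torch.distributions import Beta

class BetaPolicy(nn.Module):
    # Outputs per-dimension (alpha, beta) parameters of a Beta distribution.
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.alpha_head = nn.Linear(hidden, act_dim)
        self.beta_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.backbone(obs)
        # softplus(x) + 1 keeps alpha, beta > 1, so the density on (0, 1) stays unimodal
        alpha = nn.functional.softplus(self.alpha_head(h)) + 1.0
        beta = nn.functional.softplus(self.beta_head(h)) + 1.0
        return Beta(alpha, beta)

def sample_action(policy, obs, act_low, act_high):
    # Sample an action inside [act_low, act_high] and return its log-probability,
    # which PPO needs for the clipped importance-sampling ratio.
    dist = policy(obs)
    u = dist.sample()                            # u lies in (0, 1) by construction
    log_prob = dist.log_prob(u).sum(-1)          # joint log-prob over action dimensions
    action = act_low + (act_high - act_low) * u  # rescale to the environment bounds
    return action, log_prob

Unlike a Gaussian policy, whose samples must be clipped to the action limits and therefore bias the gradient near the boundaries, the Beta distribution has bounded support, which is one common argument for the higher success rate and reward accumulation reported above.
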
format Online
Article
Text
id pubmed-10346433
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10346433 2023-07-15 An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme Xiao, Qianhao Jiang, Li Wang, Manman Zhang, Xin Sensors (Basel) Article Traditional path planning is mainly applied in discrete action spaces, which leads to incomplete ship navigation power propulsion strategies during the path search process. Moreover, reinforcement learning suffers from low success rates due to unbalanced sample collection and poorly designed reward functions. In this paper, an environment framework is designed that is built on the Box2D physics engine and employs a reward function whose main component is the distance between the agent and the arrival point, supplemented by a potential field superimposed from boundary control, obstacles, and the arrival point. We also employ the state-of-the-art PPO (Proximal Policy Optimization) algorithm as a baseline for global path planning to address the issue of an incomplete ship navigation power propulsion strategy. Additionally, a Beta policy-based distributed sample collection PPO algorithm is proposed to overcome the problem of unbalanced sample collection in path planning by dividing the map into sub-regions for distributed sample collection. The experimental results show the following: (1) the distributed sample collection training policy exhibits stronger robustness in the PPO algorithm; (2) the introduced Beta policy for action sampling achieves a higher path planning success rate and greater reward accumulation than the Gaussian policy within the same training time; and (3) when planning a path of the same length, the proposed Beta policy-based distributed sample collection PPO algorithm generates a smoother path than traditional path planning algorithms such as A*, IDA*, and Dijkstra. MDPI 2023-07-02 /pmc/articles/PMC10346433/ /pubmed/37447949 http://dx.doi.org/10.3390/s23136101 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Xiao, Qianhao
Jiang, Li
Wang, Manman
Zhang, Xin
An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title_full An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title_fullStr An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title_full_unstemmed An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title_short An Improved Distributed Sampling PPO Algorithm Based on Beta Policy for Continuous Global Path Planning Scheme
title_sort improved distributed sampling ppo algorithm based on beta policy for continuous global path planning scheme
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10346433/
https://www.ncbi.nlm.nih.gov/pubmed/37447949
http://dx.doi.org/10.3390/s23136101
work_keys_str_mv AT xiaoqianhao animproveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT jiangli animproveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT wangmanman animproveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT zhangxin animproveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT xiaoqianhao improveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT jiangli improveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT wangmanman improveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme
AT zhangxin improveddistributedsamplingppoalgorithmbasedonbetapolicyforcontinuousglobalpathplanningscheme