Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints
With advances in algorithms, deep reinforcement learning (DRL) offers solutions to trajectory planning under uncertain environments. Unlike traditional trajectory planning, which requires considerable effort to tackle complicated high-dimensional problems, DRL enables the...
Main authors: | Chen, Lienhung; Jiang, Zhongliang; Cheng, Long; Knoll, Alois C.; Zhou, Mingchuan |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Neuroscience |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9108367/ https://www.ncbi.nlm.nih.gov/pubmed/35586262 http://dx.doi.org/10.3389/fnbot.2022.883562 |
_version_ | 1784708687229616128 |
author | Chen, Lienhung; Jiang, Zhongliang; Cheng, Long; Knoll, Alois C.; Zhou, Mingchuan |
author_facet | Chen, Lienhung; Jiang, Zhongliang; Cheng, Long; Knoll, Alois C.; Zhou, Mingchuan |
author_sort | Chen, Lienhung |
collection | PubMed |
description | With advances in algorithms, deep reinforcement learning (DRL) offers solutions to trajectory planning under uncertain environments. Unlike traditional trajectory planning, which requires considerable effort to tackle complicated high-dimensional problems, DRL enables the robot manipulator to autonomously learn and discover optimal trajectory planning by interacting with the environment. In this article, we present state-of-the-art DRL-based collision-avoidance trajectory planning for uncertain environments such as a safe human-coexistent environment. Since the robot manipulator operates in high-dimensional continuous state-action spaces, the model-free, policy-gradient-based soft actor-critic (SAC) and deep deterministic policy gradient (DDPG) frameworks are adapted to our scenario for comparison. To assess our proposal, we simulate a 7-DOF Panda (Franka Emika) robot manipulator in the PyBullet physics engine and evaluate its trajectory planning with reward, loss, safe rate, and accuracy. The results show the effectiveness of state-of-the-art DRL algorithms for trajectory planning under uncertain environments, with zero collisions after 5,000 episodes of training. |
format | Online Article Text |
id | pubmed-9108367 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9108367 2022-05-17 Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints Chen, Lienhung Jiang, Zhongliang Cheng, Long Knoll, Alois C. Zhou, Mingchuan Front Neurorobot Neuroscience With advances in algorithms, deep reinforcement learning (DRL) offers solutions to trajectory planning under uncertain environments. Unlike traditional trajectory planning, which requires considerable effort to tackle complicated high-dimensional problems, DRL enables the robot manipulator to autonomously learn and discover optimal trajectory planning by interacting with the environment. In this article, we present state-of-the-art DRL-based collision-avoidance trajectory planning for uncertain environments such as a safe human-coexistent environment. Since the robot manipulator operates in high-dimensional continuous state-action spaces, the model-free, policy-gradient-based soft actor-critic (SAC) and deep deterministic policy gradient (DDPG) frameworks are adapted to our scenario for comparison. To assess our proposal, we simulate a 7-DOF Panda (Franka Emika) robot manipulator in the PyBullet physics engine and evaluate its trajectory planning with reward, loss, safe rate, and accuracy. The results show the effectiveness of state-of-the-art DRL algorithms for trajectory planning under uncertain environments, with zero collisions after 5,000 episodes of training. Frontiers Media S.A. 2022-05-02 /pmc/articles/PMC9108367/ /pubmed/35586262 http://dx.doi.org/10.3389/fnbot.2022.883562 Text en Copyright © 2022 Chen, Jiang, Cheng, Knoll and Zhou. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). 
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Chen, Lienhung Jiang, Zhongliang Cheng, Long Knoll, Alois C. Zhou, Mingchuan Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title | Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title_full | Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title_fullStr | Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title_full_unstemmed | Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title_short | Deep Reinforcement Learning Based Trajectory Planning Under Uncertain Constraints |
title_sort | deep reinforcement learning based trajectory planning under uncertain constraints |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9108367/ https://www.ncbi.nlm.nih.gov/pubmed/35586262 http://dx.doi.org/10.3389/fnbot.2022.883562 |
work_keys_str_mv | AT chenlienhung deepreinforcementlearningbasedtrajectoryplanningunderuncertainconstraints AT jiangzhongliang deepreinforcementlearningbasedtrajectoryplanningunderuncertainconstraints AT chenglong deepreinforcementlearningbasedtrajectoryplanningunderuncertainconstraints AT knollaloisc deepreinforcementlearningbasedtrajectoryplanningunderuncertainconstraints AT zhoumingchuan deepreinforcementlearningbasedtrajectoryplanningunderuncertainconstraints |
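The abstract names the evaluation metrics (reward, loss, safe rate, accuracy) but not the reward design itself. Below is a minimal, hypothetical sketch of the kind of collision-avoidance reward commonly used in such DRL trajectory-planning setups: dense shaping toward the goal, a bonus for reaching the target, and a large penalty when the end-effector enters an obstacle's safety margin. All names, thresholds, and values here are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical step reward for collision-avoidance trajectory planning.
# safe_margin, goal_tol, and the penalty/bonus magnitudes are illustrative.
def reward(ee_pos, goal_pos, obstacle_pos,
           safe_margin=0.10, goal_tol=0.02,
           collision_penalty=-10.0, goal_bonus=10.0):
    """Return (reward, done, collided) for one environment step."""
    d_goal = math.dist(ee_pos, goal_pos)      # end-effector -> goal distance
    d_obs = math.dist(ee_pos, obstacle_pos)   # end-effector -> obstacle distance
    if d_obs < safe_margin:                   # entered the safety margin: collision
        return collision_penalty, True, True
    if d_goal < goal_tol:                     # reached the target
        return goal_bonus, True, False
    return -d_goal, False, False              # dense shaping: closer is better

# Example step: end-effector near the goal and far from the obstacle.
r, done, collided = reward((0.3, 0.0, 0.5), (0.4, 0.0, 0.5), (0.0, 0.4, 0.3))
```

Under this design, the "safe rate" reported in the abstract would simply be the fraction of episodes that terminate without `collided` being set.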