A parallel heterogeneous policy deep reinforcement learning algorithm for bipedal walking motion design

Considering the dynamics and non-linear characteristics of biped robots, gait optimization is an extremely challenging task. To tackle this issue, a parallel heterogeneous policy Deep Reinforcement Learning (DRL) algorithm for gait optimization is proposed. First, the Deep Deterministic Policy Gradient (DDPG) algorithm is used as the main architecture, running multiple biped robots in parallel to interact with the environment while sharing a single network to improve training efficiency. Heterogeneous experience replay is employed in place of the traditional experience replay mechanism to make better use of collected experience. Second, based on the walking characteristics of biped robots, a periodic gait is designed with reference to sinusoidal curves; it accounts for foot lift height, walking period, foot lift speed, and ground contact force. Finally, because different environments and robot models pose different challenges for optimization algorithms, a unified gait optimization framework for biped robots is established on the RoboCup3D platform. Comparative experiments conducted within this framework show that the proposed method enables the biped robot to walk faster and more stably.
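As a rough illustration of the abstract's two main ingredients, the sketch below shows (a) parallel workers that query one shared policy and write into a heterogeneous replay buffer with one sub-buffer per worker, and (b) a sinusoidal swing-foot height curve parameterized by lift height and walking period. It is only a minimal sketch: the environment interface (`reset()`/`step()`), the per-worker sub-buffer layout, and the exact curve shape are assumptions made for illustration, not the paper's RoboCup3D implementation, and the DDPG actor-critic update itself is omitted.

```python
import math
import random
from collections import deque

class HeterogeneousReplay:
    """One sub-buffer per worker; mini-batches mix experience from all of them."""
    def __init__(self, n_workers: int, capacity: int = 10_000):
        self.buffers = [deque(maxlen=capacity) for _ in range(n_workers)]

    def add(self, worker_id: int, transition: tuple):
        self.buffers[worker_id].append(transition)

    def sample(self, batch_size: int) -> list:
        # Draw an (approximately) equal share from every non-empty sub-buffer,
        # so each update sees experience gathered by all parallel robots.
        active = [b for b in self.buffers if b]
        if not active:
            return []
        share = max(1, batch_size // len(active))
        batch = []
        for b in active:
            batch.extend(random.sample(list(b), min(share, len(b))))
        return batch

def swing_foot_height(t: float, lift_height: float, period: float) -> float:
    """Sinusoidal swing-foot height: 0 at the start/end of a cycle, lift_height mid-cycle."""
    phase = (t % period) / period                 # 0..1 within one gait cycle
    return 0.5 * lift_height * (1.0 - math.cos(2.0 * math.pi * phase))

def collect(envs, policy, replay: HeterogeneousReplay, steps: int = 200):
    """Each worker runs its own environment copy but queries the shared policy."""
    states = [env.reset() for env in envs]
    for _ in range(steps):
        for wid, env in enumerate(envs):
            action = policy(states[wid])          # single shared actor network
            nxt, reward, done = env.step(action)
            replay.add(wid, (states[wid], action, reward, nxt, done))
            states[wid] = env.reset() if done else nxt
```

A full training loop would repeatedly call `replay.sample(batch_size)` and apply DDPG-style actor-critic updates to the shared network; the reward terms for foot lift speed and ground contact force mentioned in the abstract are likewise not specified here.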

Bibliographic Details
Main Authors: Li, Chunguang; Li, Mengru; Tao, Chongben
Format: Online Article Text
Language: English
Journal: Front Neurorobot (Frontiers in Neurorobotics)
Published: Frontiers Media S.A., 2023-08-08
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10442573/
https://www.ncbi.nlm.nih.gov/pubmed/37614967
http://dx.doi.org/10.3389/fnbot.2023.1205775
Copyright: © 2023 Li, Li and Tao. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.