
RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments

Autonomous navigation in dynamic environments where people move unpredictably is an essential task for service robots in real-world populated scenarios. Recent works in reinforcement learning (RL) have been applied to autonomous vehicle driving and to navigation around pedestrians. In this paper, we present a novel planner (reinforcement learning dynamic object velocity space, RL-DOVS) based on an RL technique for dynamic environments. The method explicitly considers the robot kinodynamic constraints for selecting the actions in every control period. The main contribution of our work is to use an environment model where the dynamism is represented in the robocentric velocity space as input to the learning system. The use of this dynamic information speeds the training process with respect to other techniques that learn directly either from raw sensors (vision, lidar) or from basic information about obstacle location and kinematics. We propose two approaches using RL and dynamic obstacle velocity (DOVS): RL-DOVS-A, which automatically learns the actions having the maximum utility, and RL-DOVS-D, in which the actions are selected by a human driver. Simulation results and evaluation are presented using different numbers of active agents and static and moving passive agents with random motion directions and velocities in many different scenarios. The performance of the technique is compared with other state-of-the-art techniques for solving navigation problems in environments such as ours.
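
As context for the method summarised above, the following is a minimal, hypothetical sketch of tabular Q-learning over a coarse robocentric velocity-space state, in the spirit of the RL-DOVS description. The action set, discretisation, reward, and toy obstacle dynamics are assumptions made purely for illustration; the paper itself (DOI below) defines the actual DOVS model, state space, and training procedure.

# Minimal, hypothetical sketch: tabular Q-learning over a crude robocentric
# velocity-space state (NOT the authors' RL-DOVS implementation; all values
# below are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

# Actions: bounded changes of (linear, angular) velocity per control period,
# one simple way to respect kinodynamic (acceleration) limits.
DV = [-0.1, 0.0, 0.1]        # assumed m/s change per control period
DW = [-0.3, 0.0, 0.3]        # assumed rad/s change per control period
ACTIONS = [(dv, dw) for dv in DV for dw in DW]

def dovs_like_state(v, w, free_band):
    """Discretise a crude velocity-space snapshot into a small state index.
    `free_band` is a scalar stand-in for the collision-free velocity band
    left by the moving obstacles (the role the DOVS model plays)."""
    v_bin = int(np.clip(v / 0.2, 0, 4))              # 5 linear-velocity bins
    w_bin = int(np.clip((w + 1.0) / 0.5, 0, 3))      # 4 angular-velocity bins
    band_bin = int(np.clip(free_band / 0.25, 0, 3))  # 4 free-band bins
    return (v_bin * 4 + w_bin) * 4 + band_bin

N_STATES = 5 * 4 * 4
Q = np.zeros((N_STATES, len(ACTIONS)))

def select_action(state, eps=0.1):
    """Epsilon-greedy choice (the automatic, RL-DOVS-A-style variant);
    a driver-supervised variant would take this choice from a human."""
    if rng.random() < eps:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Toy rollout against a stub "environment", only to show the loop shape.
v, w, band = 0.0, 0.0, 1.0
s = dovs_like_state(v, w, band)
for step in range(1000):
    a = select_action(s)
    dv, dw = ACTIONS[a]
    v = float(np.clip(v + dv, 0.0, 1.0))
    w = float(np.clip(w + dw, -1.0, 1.0))
    band = float(np.clip(band + rng.normal(0.0, 0.05), 0.0, 1.0))  # obstacles move
    r = v - (1.0 if band < 0.1 else 0.0)  # reward progress, penalise near-collision
    s_next = dovs_like_state(v, w, band)
    q_update(s, a, r, s_next)
    s = s_next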


Bibliographic Details
Main Authors: Mackay, Andrew K., Riazuelo, Luis, Montano, Luis
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9144338/
https://www.ncbi.nlm.nih.gov/pubmed/35632257
http://dx.doi.org/10.3390/s22103847
author Mackay, Andrew K.
Riazuelo, Luis
Montano, Luis
collection PubMed
description Autonomous navigation in dynamic environments where people move unpredictably is an essential task for service robots in real-world populated scenarios. Recent works in reinforcement learning (RL) have been applied to autonomous vehicle driving and to navigation around pedestrians. In this paper, we present a novel planner (reinforcement learning dynamic object velocity space, RL-DOVS) based on an RL technique for dynamic environments. The method explicitly considers the robot kinodynamic constraints for selecting the actions in every control period. The main contribution of our work is to use an environment model where the dynamism is represented in the robocentric velocity space as input to the learning system. The use of this dynamic information speeds the training process with respect to other techniques that learn directly either from raw sensors (vision, lidar) or from basic information about obstacle location and kinematics. We propose two approaches using RL and dynamic obstacle velocity (DOVS), RL-DOVS-A, which automatically learns the actions having the maximum utility, and RL-DOVS-D, in which the actions are selected by a human driver. Simulation results and evaluation are presented using different numbers of active agents and static and moving passive agents with random motion directions and velocities in many different scenarios. The performance of the technique is compared with other state-of-the-art techniques for solving navigation problems in environments such as ours.
format Online
Article
Text
id pubmed-9144338
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9144338 2022-05-29 RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments. Mackay, Andrew K.; Riazuelo, Luis; Montano, Luis. Sensors (Basel), Article. MDPI 2022-05-19. /pmc/articles/PMC9144338/ /pubmed/35632257 http://dx.doi.org/10.3390/s22103847 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9144338/
https://www.ncbi.nlm.nih.gov/pubmed/35632257
http://dx.doi.org/10.3390/s22103847