RL-DOVS: Reinforcement Learning for Autonomous Robot Navigation in Dynamic Environments
Main Authors: |
---|---
Format: | Online Article Text
Language: | English
Published: | MDPI, 2022
Subjects: |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9144338/ https://www.ncbi.nlm.nih.gov/pubmed/35632257 http://dx.doi.org/10.3390/s22103847
Summary: | Autonomous navigation in dynamic environments where people move unpredictably is an essential task for service robots in real-world populated scenarios. Recent works in reinforcement learning (RL) have been applied to autonomous vehicle driving and to navigation around pedestrians. In this paper, we present a novel planner (reinforcement learning dynamic object velocity space, RL-DOVS) based on an RL technique for dynamic environments. The method explicitly considers the robot kinodynamic constraints for selecting the actions in every control period. The main contribution of our work is to use an environment model where the dynamism is represented in the robocentric velocity space as input to the learning system. The use of this dynamic information speeds the training process with respect to other techniques that learn directly either from raw sensors (vision, lidar) or from basic information about obstacle location and kinematics. We propose two approaches using RL and dynamic obstacle velocity (DOVS), RL-DOVS-A, which automatically learns the actions having the maximum utility, and RL-DOVS-D, in which the actions are selected by a human driver. Simulation results and evaluation are presented using different numbers of active agents and static and moving passive agents with random motion directions and velocities in many different scenarios. The performance of the technique is compared with other state-of-the-art techniques for solving navigation problems in environments such as ours.
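To make the idea in the summary concrete, the sketch below shows what learning over a robocentric velocity-space state could look like: a tabular Q-learning agent whose state combines the current (v, ω) command with a coarse free/occupied summary of the velocity space, and whose actions are velocity increments bounded by the kinodynamic limits in each control period. This is not the authors' implementation; every name, grid size, limit, and the Q-learning choice itself are illustrative assumptions, and the collision-free velocity mask is assumed to be provided by an external DOVS-style obstacle-trajectory predictor.

```python
# Minimal sketch (not the paper's code): tabular Q-learning over a discretized
# robocentric velocity space, in the spirit of RL-DOVS-A. All constants and
# helper names below are illustrative assumptions.
import numpy as np

N_V, N_W = 11, 11            # assumed discretization of linear/angular velocity
V_MAX, W_MAX = 0.7, 1.0      # assumed kinodynamic limits (m/s, rad/s)
A_V, A_W = 0.3, 0.8          # assumed max accelerations (m/s^2, rad/s^2)
DT = 0.2                     # assumed control period (s)

# Actions: velocity increments reachable within one control period,
# i.e. the dynamic window imposed by the acceleration limits.
ACTIONS = [(dv, dw) for dv in (-A_V * DT, 0.0, A_V * DT)
                    for dw in (-A_W * DT, 0.0, A_W * DT)]

def dovs_state(v, w, free_mask):
    """Encode the current command plus a coarse summary of the velocity space
    (the DOVS-style input) as a discrete state index. `free_mask` is an
    (N_V, N_W) boolean grid: True where a velocity pair is collision-free
    given predicted obstacle trajectories (assumed computed elsewhere)."""
    iv = int(np.clip((v / V_MAX) * (N_V - 1), 0, N_V - 1))
    iw = int(np.clip(((w + W_MAX) / (2 * W_MAX)) * (N_W - 1), 0, N_W - 1))
    # Summarize the dynamism as the fraction of free velocities, in 5 bins.
    free_bin = min(int(free_mask.mean() * 4), 4)
    return (iv * N_W + iw) * 5 + free_bin

Q = np.zeros((N_V * N_W * 5, len(ACTIONS)))

def select_action(state, eps=0.1):
    """Epsilon-greedy choice of a velocity increment (RL-DOVS-A style);
    in the RL-DOVS-D variant this choice would come from a human driver."""
    if np.random.rand() < eps:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def apply_action(v, w, a_idx):
    """Apply the chosen increment, clipped to the kinodynamic limits, so the
    command issued every control period is always feasible for the robot."""
    dv, dw = ACTIONS[a_idx]
    return (float(np.clip(v + dv, 0.0, V_MAX)),
            float(np.clip(w + dw, -W_MAX, W_MAX)))

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning backup."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```

The design point the abstract emphasizes is visible in `dovs_state`: because the obstacle dynamism is already condensed into the velocity-space representation, the state the agent must learn over is small and directly action-relevant, which is what the authors credit with faster training than learning from raw sensor streams or raw obstacle positions and velocities.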