Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning
It is challenging to avoid obstacles safely and efficiently for multiple robots of different shapes in distributed and communication-free scenarios, where robots do not communicate with each other and only sense other robots’ positions and obstacles around them. Most existing multi-robot collision a...
Main Authors: Chen, Guangda; Yao, Shunyi; Ma, Jun; Pan, Lifan; Chen, Yu’an; Xu, Pei; Ji, Jianmin; Chen, Xiaoping
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506975/ https://www.ncbi.nlm.nih.gov/pubmed/32867080 http://dx.doi.org/10.3390/s20174836
_version_ | 1783585136330145792 |
author | Chen, Guangda; Yao, Shunyi; Ma, Jun; Pan, Lifan; Chen, Yu’an; Xu, Pei; Ji, Jianmin; Chen, Xiaoping
author_facet | Chen, Guangda; Yao, Shunyi; Ma, Jun; Pan, Lifan; Chen, Yu’an; Xu, Pei; Ji, Jianmin; Chen, Xiaoping
author_sort | Chen, Guangda |
collection | PubMed |
description | Avoiding obstacles safely and efficiently is challenging for multiple robots of different shapes in distributed, communication-free scenarios, where robots do not communicate with each other and only sense other robots’ positions and the obstacles around them. Most existing multi-robot collision avoidance systems either require communication between robots or require costly movement data of other robots, such as velocities, accelerations and paths. In this paper, we propose a map-based deep reinforcement learning approach for multi-robot collision avoidance in a distributed and communication-free environment. We use a robot’s egocentric local grid map to represent the environmental information around it, including its own shape and the observable appearances of other robots and obstacles, which can be easily generated using multiple sensors or sensor fusion. We then apply the distributed proximal policy optimization (DPPO) algorithm to train a convolutional neural network that directly maps three frames of egocentric local grid maps and the robot’s relative local goal position into low-level robot control commands. Compared to other methods, the map-based approach is more robust to noisy sensor data, does not require other robots’ movement data and accounts for the sizes and shapes of the robots involved, which makes it more efficient and easier to deploy on real robots. We first train the neural network with DPPO in a dedicated multi-robot simulator, using a multi-stage curriculum learning strategy over multiple scenarios to improve performance. We then deploy the trained model to real robots to perform collision avoidance during navigation without tedious parameter tuning. We evaluate the approach in multiple scenarios, both in the simulator and on four differential-drive mobile robots in the real world. Both qualitative and quantitative experiments show that our approach is efficient and outperforms existing DRL-based approaches on many metrics. We also conduct ablation studies showing the positive effects of using egocentric grid maps and multi-stage curriculum learning.
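The description outlines the policy architecture: a convolutional network that takes three stacked egocentric local grid maps plus the robot’s relative local goal and outputs low-level velocity commands for a differential-drive robot. The sketch below is a rough, unofficial illustration of such a network, not the authors’ implementation; the 48×48 map resolution, layer sizes, and the `MapBasedPolicy` name are assumptions, and DPPO training, the value head, and the action distribution are omitted.

```python
# Minimal sketch (illustrative only) of a map-based collision-avoidance policy:
# three stacked egocentric occupancy-grid frames + a 2-D relative goal -> (v, w).
import torch
import torch.nn as nn

class MapBasedPolicy(nn.Module):
    def __init__(self, map_size: int = 48):  # 48x48 grid is an assumed resolution
        super().__init__()
        # Convolutional encoder over the 3-frame egocentric grid-map stack.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, map_size, map_size)).shape[1]
        # Fuse map features with the 2-D relative local goal (in the robot frame).
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 2),  # mean of (linear velocity, angular velocity)
        )

    def forward(self, maps: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # maps: (B, 3, H, W) occupancy grids; goal: (B, 2) relative goal position.
        return self.head(torch.cat([self.encoder(maps), goal], dim=1))

# Usage: one robot's observation at a single time step.
policy = MapBasedPolicy()
cmd = policy(torch.zeros(1, 3, 48, 48), torch.tensor([[1.5, 0.3]]))
print(cmd.shape)  # torch.Size([1, 2])
```

Stacking three consecutive map frames gives the network short-term motion cues about nearby robots and obstacles without requiring their explicit velocity estimates, which is the property the abstract emphasizes.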
format | Online Article Text |
id | pubmed-7506975 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7506975 2020-09-30 Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning Chen, Guangda; Yao, Shunyi; Ma, Jun; Pan, Lifan; Chen, Yu’an; Xu, Pei; Ji, Jianmin; Chen, Xiaoping Sensors (Basel) Article MDPI 2020-08-27 /pmc/articles/PMC7506975/ /pubmed/32867080 http://dx.doi.org/10.3390/s20174836 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title | Distributed Non-Communicating Multi-Robot Collision Avoidance via Map-Based Deep Reinforcement Learning |
title_sort | distributed non-communicating multi-robot collision avoidance via map-based deep reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506975/ https://www.ncbi.nlm.nih.gov/pubmed/32867080 http://dx.doi.org/10.3390/s20174836 |