
A Distributed Multi-Agent Formation Control Method Based on Deep Q Learning


Bibliographic Details
Main Authors: Xie, Nianhao; Hu, Yunpeng; Chen, Lei
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008406/
https://www.ncbi.nlm.nih.gov/pubmed/35431851
http://dx.doi.org/10.3389/fnbot.2022.817168
author Xie, Nianhao
Hu, Yunpeng
Chen, Lei
collection PubMed
description Distributed control methods play an important role in the formation of a multi-agent system (MAS), and formation keeping is a prerequisite for an MAS to complete its missions. However, many distributed formation control methods lose practicality because they do not account for the collision risk between agents. In this article, a distributed formation control method that takes collision avoidance into account is proposed. First, the MAS formation control problem is divided into pair-wise unit formation problems, in which each agent moves to its expected position and only needs to avoid one obstacle. Then, a deep Q network (DQN) is used to model each agent's unit controller for this pair-wise unit formation. The DQN controller is trained with a reshaped reward function and prioritized experience replay. The agents in the MAS formation share the same unit DQN controller but receive different commands because their observations differ. Finally, through min-max fusion of the DQN controller's value functions, each agent always responds to the most dangerous avoidance situation. In this way, we obtain an easy-to-train multi-agent collision-avoidance formation control method. Unit formation and multi-agent formation simulation results are presented to verify the method.
format Online
Article
Text
id pubmed-9008406
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9008406 2022-04-15 A Distributed Multi-Agent Formation Control Method Based on Deep Q Learning. Xie, Nianhao; Hu, Yunpeng; Chen, Lei. Front Neurorobot (Neuroscience). Frontiers Media S.A. 2022-03-31 /pmc/articles/PMC9008406/ /pubmed/35431851 http://dx.doi.org/10.3389/fnbot.2022.817168 Text en. Copyright © 2022 Xie, Hu and Chen. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title A Distributed Multi-Agent Formation Control Method Based on Deep Q Learning
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008406/
https://www.ncbi.nlm.nih.gov/pubmed/35431851
http://dx.doi.org/10.3389/fnbot.2022.817168
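
The description field above outlines the method's two key ideas: a single DQN controller trained on the pair-wise (one agent, one obstacle) unit formation problem, and a min-max fusion of that controller's value functions so that each agent reacts to its most dangerous neighbor. The following is a minimal illustrative sketch of that fusion step, not the authors' implementation; the network size, observation layout, action count, and all names are assumptions made for this example.

# Illustrative sketch (not the paper's code): a shared pair-wise DQN controller
# whose Q-values are fused across neighbors with a min-max rule, so each agent
# acts against its most "dangerous" neighbor. Sizes and names are assumptions.
import torch
import torch.nn as nn

N_ACTIONS = 9   # assumed discrete action set (e.g. 8 headings + stay)
OBS_DIM = 6     # assumed pair-wise observation: own formation error + one neighbor

class UnitDQN(nn.Module):
    """Shared controller for the pair-wise (one agent, one obstacle) sub-problem."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, obs):
        return self.net(obs)  # Q-values, one per discrete action

def fused_action(dqn, pairwise_obs):
    """Min-max fusion: for each action take the worst-case (minimum) Q-value
    over all neighbors, then pick the action that maximizes that worst case."""
    with torch.no_grad():
        q = dqn(pairwise_obs)              # shape: (num_neighbors, N_ACTIONS)
        worst_case_q, _ = q.min(dim=0)     # min over neighbors, per action
        return int(worst_case_q.argmax())  # max over actions

if __name__ == "__main__":
    dqn = UnitDQN()
    obs = torch.randn(3, OBS_DIM)  # three hypothetical neighbors, one pair-wise observation each
    print("fused action index:", fused_action(dqn, obs))

Under this reading, "min-max" means: for every candidate action, take the worst-case Q-value over all pair-wise observations, then choose the action with the best worst case, which matches the abstract's claim that the agent always responds to the most dangerous avoidance situation while being trained only on the single-neighbor problem.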