A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems
Many real-world problems involve cooperation and/or competition among multiple agents. These problems can often be formulated as multi-agent problems. Recently, Reinforcement Learning (RL) has made significant progress on single-agent problems. However, multi-agent problems still cannot be easily solved by traditional RL algorithms.
Main Authors: | Li, Gao; Duan, Qiqi; Shi, Yuhui
---|---|
Format: | Online Article Text
Language: | English
Published: | 2020
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7354799/ http://dx.doi.org/10.1007/978-3-030-53956-6_57
_version_ | 1783558167229104128 |
---|---|
author | Li, Gao; Duan, Qiqi; Shi, Yuhui
author_facet | Li, Gao; Duan, Qiqi; Shi, Yuhui
author_sort | Li, Gao |
collection | PubMed |
description | Many real-world problems involve cooperation and/or competition among multiple agents. These problems can often be formulated as multi-agent problems. Recently, Reinforcement Learning (RL) has made significant progress on single-agent problems. However, multi-agent problems still cannot be easily solved by traditional RL algorithms. First, the multi-agent environment is non-stationary. Second, most multi-agent environments provide only a shared team reward as feedback. As a result, agents may not be able to learn proper cooperative or competitive behaviors with traditional RL. Our algorithm adopts Evolution Strategies (ES) for optimizing the policies that control the agents, and a value decomposition method for estimating a proper fitness for each policy. Evolutionary Algorithms (EAs) are considered a promising alternative for single-agent problems: owing to their simplicity, scalability, and efficiency in zeroth-order optimization, EAs can even outperform RL on some tasks. To solve multi-agent problems with an EA, a value decomposition method is used to decompose the team reward. Our method is parallelized over multiple cores, which speeds up the algorithm significantly. We test our algorithm on two benchmark environments, and the experimental results show that it outperforms traditional RL and other representative gradient-free methods. |
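The description above sketches the approach only at a high level: each agent's policy is optimized with Evolution Strategies (ES), a value decomposition method turns the shared team reward into a per-policy fitness, and candidate evaluations run in parallel across cores. The snippet below is a minimal illustrative sketch of that general idea, not the authors' implementation: the toy environment, the linear policies, and the fixed linear stand-in for a learned value decomposition are all assumptions made for illustration.

```python
# Minimal illustrative sketch (NOT the authors' implementation) of a parallel
# Evolution Strategies loop whose per-agent fitness comes from decomposing a
# shared team reward. Environment, policies, and decomposition are toy assumptions.
import numpy as np
from multiprocessing import Pool

N_AGENTS = 2
OBS_DIM, ACT_DIM = 4, 2
POP_SIZE, SIGMA, LR = 16, 0.1, 0.02


def policy_act(theta, obs):
    """Linear policy: action = W @ obs, with W stored flattened in theta."""
    return theta.reshape(ACT_DIM, OBS_DIM) @ obs


def toy_episode(thetas, seed):
    """Toy cooperative task: only a shared scalar team reward is observed,
    mimicking the team-reward-only feedback described in the abstract."""
    rng = np.random.default_rng(seed)
    obs = rng.normal(size=OBS_DIM)
    joint_action = sum(policy_act(t, obs) for t in thetas)
    team_reward = -np.linalg.norm(joint_action - np.ones(ACT_DIM))
    # Per-agent features that a value-decomposition model could condition on.
    feats = [np.concatenate([obs, policy_act(t, obs)]) for t in thetas]
    return team_reward, feats


def decomposed_fitness(w_value, team_reward, feats):
    """Stand-in for a learned value decomposition: score each agent by a linear
    function of its own features, then spread the residual evenly so the
    per-agent utilities sum exactly to the shared team reward."""
    utilities = np.array([w_value @ f for f in feats])
    utilities += (team_reward - utilities.sum()) / len(utilities)
    return utilities


def evaluate(args):
    """Worker: roll out one perturbed set of policies, return per-agent fitness."""
    thetas, w_value, seed = args
    team_reward, feats = toy_episode(thetas, seed)
    return decomposed_fitness(w_value, team_reward, feats)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thetas = [rng.normal(scale=0.1, size=OBS_DIM * ACT_DIM) for _ in range(N_AGENTS)]
    w_value = rng.normal(scale=0.1, size=OBS_DIM + ACT_DIM)  # decomposition weights

    with Pool() as pool:  # candidate evaluations run in parallel across cores
        for gen in range(50):
            eps = [rng.normal(size=(POP_SIZE, OBS_DIM * ACT_DIM)) for _ in range(N_AGENTS)]
            jobs = [([thetas[i] + SIGMA * eps[i][k] for i in range(N_AGENTS)], w_value, k)
                    for k in range(POP_SIZE)]
            fitness = np.array(pool.map(evaluate, jobs))  # (POP_SIZE, N_AGENTS)

            # ES update: each agent follows the gradient estimate of its own
            # decomposed fitness rather than the raw team reward.
            for i in range(N_AGENTS):
                advantage = (fitness[:, i] - fitness[:, i].mean()) / (fitness[:, i].std() + 1e-8)
                thetas[i] += LR / (POP_SIZE * SIGMA) * eps[i].T @ advantage
```

In the paper's setting the decomposition itself would be learned so that the per-agent utilities account for the observed team reward; here the residual is simply spread evenly to keep the sketch self-contained.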
format | Online Article Text |
id | pubmed-7354799 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
record_format | MEDLINE/PubMed |
spelling | pubmed-7354799 2020-07-13 A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems Li, Gao; Duan, Qiqi; Shi, Yuhui Advances in Swarm Intelligence Article Many real-world problems involve cooperation and/or competition among multiple agents. These problems can often be formulated as multi-agent problems. Recently, Reinforcement Learning (RL) has made significant progress on single-agent problems. However, multi-agent problems still cannot be easily solved by traditional RL algorithms. First, the multi-agent environment is non-stationary. Second, most multi-agent environments provide only a shared team reward as feedback. As a result, agents may not be able to learn proper cooperative or competitive behaviors with traditional RL. Our algorithm adopts Evolution Strategies (ES) for optimizing the policies that control the agents, and a value decomposition method for estimating a proper fitness for each policy. Evolutionary Algorithms (EAs) are considered a promising alternative for single-agent problems: owing to their simplicity, scalability, and efficiency in zeroth-order optimization, EAs can even outperform RL on some tasks. To solve multi-agent problems with an EA, a value decomposition method is used to decompose the team reward. Our method is parallelized over multiple cores, which speeds up the algorithm significantly. We test our algorithm on two benchmark environments, and the experimental results show that it outperforms traditional RL and other representative gradient-free methods. 2020-06-22 /pmc/articles/PMC7354799/ http://dx.doi.org/10.1007/978-3-030-53956-6_57 Text en © Springer Nature Switzerland AG 2020 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Li, Gao Duan, Qiqi Shi, Yuhui A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title | A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title_full | A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title_fullStr | A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title_full_unstemmed | A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title_short | A Parallel Evolutionary Algorithm with Value Decomposition for Multi-agent Problems |
title_sort | parallel evolutionary algorithm with value decomposition for multi-agent problems |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7354799/ http://dx.doi.org/10.1007/978-3-030-53956-6_57 |
work_keys_str_mv | AT ligao aparallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems AT duanqiqi aparallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems AT shiyuhui aparallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems AT ligao parallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems AT duanqiqi parallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems AT shiyuhui parallelevolutionaryalgorithmwithvaluedecompositionformultiagentproblems |