
Multi-Agent Reinforcement Learning via Adaptive Kalman Temporal Difference and Successor Representation

The development of distributed Multi-Agent Reinforcement Learning (MARL) algorithms has recently attracted a surge of interest. Generally speaking, conventional Model-Based (MB) or Model-Free (MF) RL algorithms are not directly applicable to MARL problems because they rely on a fixed reward model for learning the underlying value function. While Deep Neural Network (DNN)-based solutions perform well, they remain prone to overfitting, high sensitivity to parameter selection, and sample inefficiency. In this paper, an adaptive Kalman Filter (KF)-based framework is introduced as an efficient alternative that addresses these problems by capitalizing on unique characteristics of the KF, such as uncertainty modeling and online second-order learning. More specifically, the paper proposes the Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its Successor Representation-based variant, referred to as MAK-SR. The proposed MAK-TD/SR frameworks account for the continuous nature of the action space associated with high-dimensional multi-agent environments and exploit Kalman Temporal Difference (KTD) to address parameter uncertainty. The frameworks are evaluated via several experiments implemented on the OpenAI Gym MARL benchmarks, with different numbers of agents in cooperative, competitive, and mixed (cooperative-competitive) scenarios. The experimental results illustrate the superior performance of the proposed MAK-TD/SR frameworks compared to their state-of-the-art counterparts.
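The abstract only names its two building blocks, Kalman Temporal Difference and the Successor Representation, so the following is a minimal single-agent Python sketch of each generic technique, not the paper's multi-agent MAK-TD/MAK-SR algorithms. The feature dimensions, noise levels, and learning rates are illustrative assumptions.

```python
import numpy as np

class LinearKTD:
    """Minimal Kalman Temporal Difference (KTD) sketch.

    The weights theta of a linear value function V(s) = theta @ phi(s) are
    treated as the hidden state of a Kalman filter; each reward is a noisy
    scalar measurement r ~= (phi(s) - gamma * phi(s')) @ theta. Generic KTD
    only; noise levels are illustrative assumptions, not the paper's values.
    """

    def __init__(self, n_features, gamma=0.99, process_noise=1e-4, obs_noise=1.0):
        self.gamma = gamma
        self.theta = np.zeros(n_features)             # weight (state) estimate
        self.P = np.eye(n_features)                   # weight covariance = uncertainty
        self.Pv = process_noise * np.eye(n_features)  # random-walk process noise
        self.R = obs_noise                            # scalar observation noise

    def update(self, phi_s, phi_next, reward, done):
        # Measurement vector of the TD relation r ~= h @ theta.
        h = phi_s - (0.0 if done else self.gamma) * phi_next
        self.P = self.P + self.Pv                     # predict: uncertainty grows
        innovation = reward - h @ self.theta          # TD error as KF innovation
        S = h @ self.P @ h + self.R                   # innovation variance (scalar)
        K = self.P @ h / S                            # Kalman gain
        self.theta = self.theta + K * innovation      # correct the weights
        self.P = self.P - np.outer(K, h @ self.P)     # shrink the covariance
        return innovation

    def value(self, phi_s):
        return self.theta @ phi_s

class TabularSR:
    """Tabular Successor Representation (SR) sketch for finite state spaces.

    M[s, s2] estimates the expected discounted future occupancy of s2 when
    starting from s, so the value factorizes as V(s) = M[s] @ w, where w is
    a learned per-state reward estimate. The learning rate is illustrative.
    """

    def __init__(self, n_states, gamma=0.99, lr=0.1):
        self.gamma, self.lr = gamma, lr
        self.M = np.eye(n_states)    # each state trivially occupies itself
        self.w = np.zeros(n_states)  # one-step reward estimates

    def update(self, s, s_next, reward, done):
        onehot = np.eye(len(self.w))[s]
        target = onehot + (0.0 if done else self.gamma) * self.M[s_next]
        self.M[s] += self.lr * (target - self.M[s])  # TD update on occupancies
        self.w[s] += self.lr * (reward - self.w[s])  # decoupled reward learning

    def value(self, s):
        return self.M[s] @ self.w
```

Two properties of these sketches map onto the abstract's claims: the KTD covariance P carries the per-weight uncertainty that a point-estimate TD method lacks, and the SR factorization V(s) = M[s] @ w separates environment dynamics from the reward model, so only w needs relearning when rewards change, which is presumably why the paper pairs the SR with a KF-based learner rather than assuming a fixed reward model.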


Bibliographic Details
Main Authors: Salimibeni, Mohammad; Mohammadi, Arash; Malekzadeh, Parvin; Plataniotis, Konstantinos N.
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8962978/
https://www.ncbi.nlm.nih.gov/pubmed/35214293
http://dx.doi.org/10.3390/s22041393
Collection: PubMed (MEDLINE/PubMed record pubmed-8962978, National Center for Biotechnology Information)
Journal: Sensors (Basel)
Published Online: 2022-02-11
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).