
Learning Attentional Communication with a Common Network for Multiagent Reinforcement Learning


Bibliographic Details
Main Authors: Yu, Wenwu; Wang, Rui; Hu, Xiaohui
Format: Online Article Text
Language: English
Published: Hindawi, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10322483/
https://www.ncbi.nlm.nih.gov/pubmed/37416594
http://dx.doi.org/10.1155/2023/5814420
Description
Summary: For multiagent communication and cooperation tasks in partially observable environments, most existing works use only the information contained in the hidden layers of a network at the current time step, which limits the source of information. In this paper, we propose a novel algorithm named multiagent attentional communication with a common network (MAACCN), which adds a consensus information module to expand the source of communication information. We treat the historically best-performing overall network of the agents as the common network and extract consensus knowledge from it. In particular, we combine current observations with this consensus knowledge through an attention mechanism to infer more effective information as input for decision-making. Experiments on the StarCraft Multi-Agent Challenge (SMAC) demonstrate the effectiveness of MAACCN against a set of baselines and show that it improves performance by more than 20% in a super hard scenario.
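The fusion step described in the summary (combining an agent's current observation with consensus knowledge via attention) might be sketched as below. This is a minimal illustrative sketch, not the paper's exact architecture: the scaled dot-product form, the feature shapes, and the function names are all assumptions, and the consensus features stand in for whatever the common network would produce.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(obs_feat, consensus_feat):
    """Fuse current observation features with consensus knowledge.

    obs_feat:       (d,) features of the agent's current observation (query).
    consensus_feat: (k, d) features extracted from the common network
                    (keys/values) -- hypothetical shapes for illustration.
    Returns a (2d,) vector: observation concatenated with the attended
    consensus context, usable as input for decision-making.
    """
    d = obs_feat.shape[-1]
    scores = consensus_feat @ obs_feat / np.sqrt(d)  # (k,) similarity scores
    weights = softmax(scores)                        # attention over consensus
    context = weights @ consensus_feat               # (d,) weighted summary
    return np.concatenate([obs_feat, context])
```

The attention weights let each agent emphasize the pieces of consensus knowledge most relevant to its current observation, rather than consuming the common network's output uniformly.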