KnowRU: Knowledge Reuse via Knowledge Distillation in Multi-Agent Reinforcement Learning
Recently, deep reinforcement learning (RL) algorithms have achieved significant progress in the multi-agent domain. However, training for increasingly complex tasks is time-consuming and resource-intensive. To alleviate this problem, efficient leveraging of historical experience is essential,...
Main Authors: Gao, Zijian; Xu, Kele; Ding, Bo; Wang, Huaimin
Format: Online Article Text
Language: English
Published: MDPI, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8393270/ https://www.ncbi.nlm.nih.gov/pubmed/34441184 http://dx.doi.org/10.3390/e23081043
Similar Items
- Knowledge Reuse of Multi-Agent Reinforcement Learning in Cooperative Tasks
  by: Shi, Daming, et al.
  Published: (2022)
- Knowledge Fusion Distillation: Improving Distillation with Multi-scale Attention Mechanisms
  by: Li, Linfeng, et al.
  Published: (2023)
- Knowledge distillation in deep learning and its applications
  by: Alkhulaifi, Abdolmaged, et al.
  Published: (2021)
- Knowledge distillation based on multi-layer fusion features
  by: Tan, Shengyuan, et al.
  Published: (2023)
- Communication-efficient federated learning via knowledge distillation
  by: Wu, Chuhan, et al.
  Published: (2022)