Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments
Deep Reinforcement Learning (DeepRL) methods have been widely used in robotics to learn about the environment and acquire behaviours autonomously. Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice to help learners choos...
| Main Authors: | Nguyen, Hung Son; Cruz, Francisco; Dazeley, Richard |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007476/ https://www.ncbi.nlm.nih.gov/pubmed/36904885 http://dx.doi.org/10.3390/s23052681 |
_version_ | 1784905530975715328 |
author | Nguyen, Hung Son; Cruz, Francisco; Dazeley, Richard
author_facet | Nguyen, Hung Son; Cruz, Francisco; Dazeley, Richard
author_sort | Nguyen, Hung Son |
collection | PubMed |
description | Deep Reinforcement Learning (DeepRL) methods have been widely used in robotics to learn about the environment and acquire behaviours autonomously. Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice to help learners choose actions to speed up the learning process. However, current research has been limited to interactions that offer actionable advice to only the current state of the agent. Additionally, the information is discarded by the agent after a single use, which causes a duplicate process at the same state for a revisit. In this paper, we present Broad-Persistent Advising (BPA), an approach that retains and reuses the processed information. It not only helps trainers give more general advice relevant to similar states instead of only the current state, but also allows the agent to speed up the learning process. We tested the proposed approach in two continuous robotic scenarios, namely a cart pole balancing task and a simulated robot navigation task. The results demonstrated that the agent’s learning speed increased, as evidenced by the rising reward points of up to 37%, while maintaining the number of interactions required for the trainer, in comparison to the DeepIRL approach. |
format | Online Article Text |
id | pubmed-10007476 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10007476 2023-03-12 Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments Nguyen, Hung Son Cruz, Francisco Dazeley, Richard Sensors (Basel) Article Deep Reinforcement Learning (DeepRL) methods have been widely used in robotics to learn about the environment and acquire behaviours autonomously. Deep Interactive Reinforcement Learning (DeepIRL) includes interactive feedback from an external trainer or expert giving advice to help learners choose actions to speed up the learning process. However, current research has been limited to interactions that offer actionable advice to only the current state of the agent. Additionally, the information is discarded by the agent after a single use, which causes a duplicate process at the same state for a revisit. In this paper, we present Broad-Persistent Advising (BPA), an approach that retains and reuses the processed information. It not only helps trainers give more general advice relevant to similar states instead of only the current state, but also allows the agent to speed up the learning process. We tested the proposed approach in two continuous robotic scenarios, namely a cart pole balancing task and a simulated robot navigation task. The results demonstrated that the agent’s learning speed increased, as evidenced by the rising reward points of up to 37%, while maintaining the number of interactions required for the trainer, in comparison to the DeepIRL approach. MDPI 2023-03-01 /pmc/articles/PMC10007476/ /pubmed/36904885 http://dx.doi.org/10.3390/s23052681 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Nguyen, Hung Son Cruz, Francisco Dazeley, Richard Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title | Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title_full | Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title_fullStr | Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title_full_unstemmed | Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title_short | Towards a Broad-Persistent Advising Approach for Deep Interactive Reinforcement Learning in Robotic Environments |
title_sort | towards a broad-persistent advising approach for deep interactive reinforcement learning in robotic environments |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007476/ https://www.ncbi.nlm.nih.gov/pubmed/36904885 http://dx.doi.org/10.3390/s23052681 |
work_keys_str_mv | AT nguyenhungson towardsabroadpersistentadvisingapproachfordeepinteractivereinforcementlearninginroboticenvironments AT cruzfrancisco towardsabroadpersistentadvisingapproachfordeepinteractivereinforcementlearninginroboticenvironments AT dazeleyrichard towardsabroadpersistentadvisingapproachfordeepinteractivereinforcementlearninginroboticenvironments |
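The abstract above describes the core mechanism of Broad-Persistent Advising only at a high level: trainer advice is retained and generalised so that similar states can reuse it instead of the agent querying the trainer again at every revisit. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation; the `PersistentAdviceMemory` class, the rounding-based state generalisation, the `select_action` helper, and the `advice_prob` parameter are all hypothetical names introduced here for illustration.

```python
# Speculative sketch of the advice-retention idea described in the abstract:
# advice is stored under a generalised state key and reused when a similar
# state is revisited, instead of being discarded after a single use.
import random
from typing import Callable, Dict, Optional, Tuple


class PersistentAdviceMemory:
    """Keeps trainer advice so similar states can reuse it later (assumed design)."""

    def __init__(self, precision: int = 1) -> None:
        self.precision = precision                       # coarseness of state generalisation
        self.advice: Dict[Tuple[float, ...], int] = {}

    def _key(self, state: Tuple[float, ...]) -> Tuple[float, ...]:
        # Map nearby continuous states to one key (illustrative generalisation).
        return tuple(round(s, self.precision) for s in state)

    def store(self, state: Tuple[float, ...], action: int) -> None:
        self.advice[self._key(state)] = action

    def recall(self, state: Tuple[float, ...]) -> Optional[int]:
        return self.advice.get(self._key(state))


def select_action(
    state: Tuple[float, ...],
    policy_action: Callable[[Tuple[float, ...]], int],
    memory: PersistentAdviceMemory,
    trainer: Optional[Callable[[Tuple[float, ...]], int]] = None,
    advice_prob: float = 0.3,
) -> int:
    """Prefer remembered advice; otherwise occasionally ask the trainer."""
    remembered = memory.recall(state)
    if remembered is not None:
        return remembered                                # reuse earlier advice, no new query
    if trainer is not None and random.random() < advice_prob:
        advised = trainer(state)                         # one interactive query to the trainer
        memory.store(state, advised)                     # retained for similar states later
        return advised
    return policy_action(state)                          # fall back to the learned policy
```

In this sketch, the choice of state generalisation (here, simple rounding) determines which states count as "similar" and therefore how broadly a single piece of advice is reused; the abstract reports that retaining advice in this spirit raised reward by up to 37% on a cart pole balancing task and a simulated robot navigation task without increasing the number of trainer interactions.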