Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning
In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert...
Main Authors: | Kyoung, Dohyun; Sung, Yunsick |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490608/ https://www.ncbi.nlm.nih.gov/pubmed/37687867 http://dx.doi.org/10.3390/s23177411 |
_version_ | 1785103879192444928 |
author | Kyoung, Dohyun Sung, Yunsick |
author_facet | Kyoung, Dohyun Sung, Yunsick |
author_sort | Kyoung, Dohyun |
collection | PubMed |
description | In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert data or utilizing pretrained models. Nevertheless, these methods do not effectively reduce the initial exploration range, as the exploration by the agent is limited to states adjacent to those included in the expert data. This paper proposes a method to reduce the initial exploration range in reinforcement learning through a transformer decoder pretrained on expert data. The proposed method involves pretraining a transformer decoder with massive expert data to guide the agent’s actions during the early learning stages. After a certain learning threshold is reached, the actions are determined using the epsilon-greedy strategy. An experiment was conducted in the basketball game FreeStyle1 to compare the proposed method with the traditional Deep Q-Network (DQN) using the epsilon-greedy strategy. The results indicated that the proposed method yielded approximately 2.5 times the average reward and a 26% higher win rate, proving its enhanced performance in reducing the exploration range and optimizing learning times. This innovative method presents a significant improvement over traditional exploration techniques in reinforcement learning. |
format | Online Article Text |
id | pubmed-10490608 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-104906082023-09-09 Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning Kyoung, Dohyun Sung, Yunsick Sensors (Basel) Article In reinforcement learning, the epsilon (ε)-greedy strategy is commonly employed as an exploration technique. This method, however, leads to extensive initial exploration and prolonged learning periods. Existing approaches to mitigate this issue involve constraining the exploration range using expert data or utilizing pretrained models. Nevertheless, these methods do not effectively reduce the initial exploration range, as the exploration by the agent is limited to states adjacent to those included in the expert data. This paper proposes a method to reduce the initial exploration range in reinforcement learning through a transformer decoder pretrained on expert data. The proposed method involves pretraining a transformer decoder with massive expert data to guide the agent’s actions during the early learning stages. After a certain learning threshold is reached, the actions are determined using the epsilon-greedy strategy. An experiment was conducted in the basketball game FreeStyle1 to compare the proposed method with the traditional Deep Q-Network (DQN) using the epsilon-greedy strategy. The results indicated that the proposed method yielded approximately 2.5 times the average reward and a 26% higher win rate, proving its enhanced performance in reducing the exploration range and optimizing learning times. This innovative method presents a significant improvement over traditional exploration techniques in reinforcement learning. MDPI 2023-08-25 /pmc/articles/PMC10490608/ /pubmed/37687867 http://dx.doi.org/10.3390/s23177411 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Kyoung, Dohyun Sung, Yunsick Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_full | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_fullStr | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_full_unstemmed | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_short | Transformer Decoder-Based Enhanced Exploration Method to Alleviate Initial Exploration Problems in Reinforcement Learning |
title_sort | transformer decoder-based enhanced exploration method to alleviate initial exploration problems in reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10490608/ https://www.ncbi.nlm.nih.gov/pubmed/37687867 http://dx.doi.org/10.3390/s23177411 |
work_keys_str_mv | AT kyoungdohyun transformerdecoderbasedenhancedexplorationmethodtoalleviateinitialexplorationproblemsinreinforcementlearning AT sungyunsick transformerdecoderbasedenhancedexplorationmethodtoalleviateinitialexplorationproblemsinreinforcementlearning |
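The abstract describes a simple control flow: during early learning, actions come from a pretrained model trained on expert data; once a learning threshold is passed, action selection switches to the standard epsilon-greedy strategy. As a hedged illustration only (this is not the authors' implementation; the function name `select_action`, the `warmup_steps` threshold, and the `guide` callable standing in for the pretrained transformer decoder are all hypothetical), the switch could be sketched in Python as:

```python
import random

def select_action(q_values, step, epsilon=0.1, warmup_steps=1000, guide=None):
    """Pick an action index for the current step.

    Before `warmup_steps`, a pretrained guide policy (if given) proposes the
    action, narrowing the initial exploration range; afterwards the standard
    epsilon-greedy rule over `q_values` takes over.
    """
    # Early learning stage: defer to the pretrained model's suggestion.
    if guide is not None and step < warmup_steps:
        return guide(step)
    # Epsilon-greedy stage: explore with probability epsilon...
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    # ...otherwise exploit the current value estimates.
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

A design note: gating on a fixed step count is the simplest possible threshold; the paper's "certain learning threshold" could equally be a reward or loss criterion, which this sketch does not model.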