Exploration for Countering the Episodic Memory
Main Authors: | Zhou, Rong; Wang, Yuan; Zhang, Xiwen; Wang, Chao |
Format: | Online Article Text |
Language: | English |
Published: | Hindawi, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8995543/ https://www.ncbi.nlm.nih.gov/pubmed/35419049 http://dx.doi.org/10.1155/2022/7286186 |
_version_ | 1784684322054209536 |
author | Zhou, Rong Wang, Yuan Zhang, Xiwen Wang, Chao |
author_facet | Zhou, Rong Wang, Yuan Zhang, Xiwen Wang, Chao |
author_sort | Zhou, Rong |
collection | PubMed |
description | Reinforcement learning is a prominent computational approach to goal-directed learning and decision making, and exploration plays an important role in improving an agent's performance. In low-dimensional Markov decision processes, tabular reinforcement learning combined with count-based exploration works well, since the states of the Markov decision process can easily be enumerated exhaustively. It is generally accepted that count-based exploration strategies become inefficient when applied to high-dimensional Markov decision processes (typically those with high-dimensional state spaces, continuous action spaces, or both), because in deep reinforcement learning most states occur only once. Exploration methods widely applied in deep reinforcement learning instead rely on heuristic intrinsic motivation to explore unseen states or unreached parts of a state. The episodic memory module simulates the function of the hippocampus in the human brain: it is precisely a memory of past experience, so it seems natural to use episodic memory to count the situations that have been encountered. We therefore use an episodic memory module to remember the states the agent has encountered, as a count of states, and the purpose of exploration becomes reducing the probability of encountering these states again; that is, exploration counters the episodic memory. In this article, we take advantage of the episodic memory module to estimate the number of states experienced, so as to counter the episodic memory. Experiments on the OpenAI platform show that the state-counting accuracy of this method is higher than that of the CTS model. The method is also applied to high-dimensional object detection and tracking, where it achieves good results. |
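The description above can be illustrated with a minimal sketch of the core idea: an episodic memory that counts visited states, with an intrinsic exploration bonus that decays as a state's count grows, steering the agent away from states the memory already holds. This is not the paper's implementation; the hashing scheme (`state_key`), the bonus coefficient `beta`, and the `1/sqrt(N)` decay are illustrative assumptions commonly used in count-based exploration.

```python
import math
from collections import defaultdict


def state_key(state):
    """Discretize a continuous state into a hashable key so that
    similar states share a count (a hypothetical stand-in for the
    paper's episodic memory lookup)."""
    return tuple(round(x, 1) for x in state)


class EpisodicCountBonus:
    """Count-based exploration bonus backed by a simple episodic
    memory of visited states."""

    def __init__(self, beta=0.1):
        self.beta = beta
        # Episodic memory: state key -> number of visits.
        self.memory = defaultdict(int)

    def bonus(self, state):
        """Record a visit and return an intrinsic reward that shrinks
        as the memory accumulates visits to this state, so exploration
        'counters' the episodic memory."""
        key = state_key(state)
        self.memory[key] += 1
        return self.beta / math.sqrt(self.memory[key])
```

In use, the first visit to a state yields the full bonus `beta`, the second visit `beta/sqrt(2)`, and so on; the intrinsic reward would be added to the environment reward during training.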
format | Online Article Text |
id | pubmed-8995543 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-8995543 2022-04-12 Exploration for Countering the Episodic Memory Zhou, Rong Wang, Yuan Zhang, Xiwen Wang, Chao Comput Intell Neurosci Research Article Reinforcement learning is a prominent computational approach to goal-directed learning and decision making, and exploration plays an important role in improving an agent's performance. In low-dimensional Markov decision processes, tabular reinforcement learning combined with count-based exploration works well, since the states of the Markov decision process can easily be enumerated exhaustively. It is generally accepted that count-based exploration strategies become inefficient when applied to high-dimensional Markov decision processes (typically those with high-dimensional state spaces, continuous action spaces, or both), because in deep reinforcement learning most states occur only once. Exploration methods widely applied in deep reinforcement learning instead rely on heuristic intrinsic motivation to explore unseen states or unreached parts of a state. The episodic memory module simulates the function of the hippocampus in the human brain: it is precisely a memory of past experience, so it seems natural to use episodic memory to count the situations that have been encountered. We therefore use an episodic memory module to remember the states the agent has encountered, as a count of states, and the purpose of exploration becomes reducing the probability of encountering these states again; that is, exploration counters the episodic memory. In this article, we take advantage of the episodic memory module to estimate the number of states experienced, so as to counter the episodic memory. Experiments on the OpenAI platform show that the state-counting accuracy of this method is higher than that of the CTS model. The method is also applied to high-dimensional object detection and tracking, where it achieves good results.
Hindawi 2022-03-24 /pmc/articles/PMC8995543/ /pubmed/35419049 http://dx.doi.org/10.1155/2022/7286186 Text en Copyright © 2022 Rong Zhou et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Zhou, Rong Wang, Yuan Zhang, Xiwen Wang, Chao Exploration for Countering the Episodic Memory |
title | Exploration for Countering the Episodic Memory |
title_full | Exploration for Countering the Episodic Memory |
title_fullStr | Exploration for Countering the Episodic Memory |
title_full_unstemmed | Exploration for Countering the Episodic Memory |
title_short | Exploration for Countering the Episodic Memory |
title_sort | exploration for countering the episodic memory |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8995543/ https://www.ncbi.nlm.nih.gov/pubmed/35419049 http://dx.doi.org/10.1155/2022/7286186 |
work_keys_str_mv | AT zhourong explorationforcounteringtheepisodicmemory AT wangyuan explorationforcounteringtheepisodicmemory AT zhangxiwen explorationforcounteringtheepisodicmemory AT wangchao explorationforcounteringtheepisodicmemory |