Category learning in a recurrent neural network with reinforcement learning
It is known that humans and animals can learn and utilize category information quickly and efficiently to adapt to changing environments, and several brain areas are involved in learning and encoding category information. However, it is unclear how the brain learns and forms categorical...
Main Authors: | Zhang, Ying, Pan, Xiaochuan, Wang, Yihong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2022 |
Subjects: | Psychiatry |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9640766/ https://www.ncbi.nlm.nih.gov/pubmed/36387007 http://dx.doi.org/10.3389/fpsyt.2022.1008011 |
_version_ | 1784825934027685888 |
---|---|
author | Zhang, Ying Pan, Xiaochuan Wang, Yihong |
author_facet | Zhang, Ying Pan, Xiaochuan Wang, Yihong |
author_sort | Zhang, Ying |
collection | PubMed |
description | It is known that humans and animals can learn and utilize category information quickly and efficiently to adapt to changing environments, and several brain areas are involved in learning and encoding category information. However, it is unclear how the brain learns and forms categorical representations at the level of neural circuits. To investigate this issue at the network level, we combine a recurrent neural network with reinforcement learning to construct a deep reinforcement learning model that demonstrates how categories are learned and represented in the network. The model consists of a policy network and a value network. The policy network is responsible for updating the policy to choose actions, while the value network is responsible for evaluating actions to predict rewards. The agent learns dynamically through the information interaction between the policy network and the value network. The model was trained to learn six stimulus-stimulus associative chains in a sequential paired-association task that had been learned by a monkey. The simulation results demonstrated that our model was able to learn the stimulus-stimulus associative chains and successfully reproduced behavior similar to that of the monkey performing the same task. Two types of neurons were found in this model: one type primarily encoded identity information about individual stimuli; the other mainly encoded category information of the associated stimuli in one chain. Both types of activity patterns were also observed in the primate prefrontal cortex after the monkey learned the same task. Furthermore, the ability of these two types of neurons to encode stimulus or category information was enhanced while the model was learning the task. Our results suggest that neurons in a recurrent neural network can form categorical representations through deep reinforcement learning while learning stimulus-stimulus associations.
This might provide a new approach for understanding the neuronal mechanisms by which the prefrontal cortex learns and encodes category information. |
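The policy/value interaction described in the abstract can be illustrated with a minimal actor-critic sketch. This is not the authors' implementation: the network sizes, learning rate, and the toy stimulus→action curriculum below are assumptions, and each trial is collapsed to a single recurrent step (the paper's sequential paired-association task unfolds over several stimulus presentations). The value head's reward prediction serves as a baseline for the policy-gradient update, which is the "information interaction" between the two networks in its simplest form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- illustrative assumptions, not taken from the paper.
N_STIM, N_ACT, N_HID = 6, 3, 32

class RecurrentActorCritic:
    """Recurrent core with a policy head (action logits) and a value head
    (scalar reward prediction), loosely mirroring the policy and value
    networks described in the abstract."""
    def __init__(self):
        s = 0.1
        self.W_in  = rng.normal(0, s, (N_HID, N_STIM))  # stimulus -> hidden
        self.W_rec = rng.normal(0, s, (N_HID, N_HID))   # hidden -> hidden (recurrence)
        self.W_pi  = rng.normal(0, s, (N_ACT, N_HID))   # policy head
        self.w_v   = rng.normal(0, s, N_HID)            # value head

    def step(self, h, stim_onehot):
        h = np.tanh(self.W_in @ stim_onehot + self.W_rec @ h)
        logits = self.W_pi @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                                    # softmax action policy
        v = float(self.w_v @ h)                         # predicted reward
        return h, p, v

def train_trial(net, stim, correct_act, lr=0.05):
    """One trial: present a stimulus, sample an action, reward if correct.
    The value prediction acts as a baseline (advantage = r - v); only the
    two output heads are updated here, for brevity."""
    h, p, v = net.step(np.zeros(N_HID), np.eye(N_STIM)[stim])
    a = rng.choice(N_ACT, p=p)
    r = 1.0 if a == correct_act else 0.0
    adv = r - v
    grad_logits = -p
    grad_logits[a] += 1.0                               # d log pi(a) / d logits
    net.W_pi += lr * adv * np.outer(grad_logits, h)     # policy-gradient step
    net.w_v  += lr * (r - v) * h                        # value head tracks reward
    return r

# Hypothetical curriculum: stimulus i is rewarded for action i % N_ACT,
# so stimuli sharing a rewarded action form a crude "category".
net = RecurrentActorCritic()
rewards = [train_trial(net, int(s), int(s) % N_ACT)
           for s in rng.integers(N_STIM, size=3000)]
early, late = np.mean(rewards[:300]), np.mean(rewards[-300:])
```

Under this setup the reward rate rises from near chance (1/3) toward ceiling, showing how a reward-prediction baseline stabilizes policy learning; probing the hidden state after training is the analogue of the paper's analysis of stimulus- versus category-coding units.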
format | Online Article Text |
id | pubmed-9640766 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-96407662022-11-15 Category learning in a recurrent neural network with reinforcement learning Zhang, Ying Pan, Xiaochuan Wang, Yihong Front Psychiatry Psychiatry Frontiers Media S.A. 2022-10-25 /pmc/articles/PMC9640766/ /pubmed/36387007 http://dx.doi.org/10.3389/fpsyt.2022.1008011 Text en Copyright © 2022 Zhang, Pan and Wang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychiatry Zhang, Ying Pan, Xiaochuan Wang, Yihong Category learning in a recurrent neural network with reinforcement learning |
title | Category learning in a recurrent neural network with reinforcement learning |
title_full | Category learning in a recurrent neural network with reinforcement learning |
title_fullStr | Category learning in a recurrent neural network with reinforcement learning |
title_full_unstemmed | Category learning in a recurrent neural network with reinforcement learning |
title_short | Category learning in a recurrent neural network with reinforcement learning |
title_sort | category learning in a recurrent neural network with reinforcement learning |
topic | Psychiatry |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9640766/ https://www.ncbi.nlm.nih.gov/pubmed/36387007 http://dx.doi.org/10.3389/fpsyt.2022.1008011 |
work_keys_str_mv | AT zhangying categorylearninginarecurrentneuralnetworkwithreinforcementlearning AT panxiaochuan categorylearninginarecurrentneuralnetworkwithreinforcementlearning AT wangyihong categorylearninginarecurrentneuralnetworkwithreinforcementlearning |