Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning
Main Authors: | Anwar, Haroon; Caby, Simon; Dura-Bernal, Salvador; D’Onofrio, David; Hasegan, Daniel; Deible, Matt; Grunblatt, Sara; Chadderdon, George L.; Kerr, Cliff C.; Lakatos, Peter; Lytton, William W.; Hazan, Hananel; Neymotin, Samuel A. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9094569/ https://www.ncbi.nlm.nih.gov/pubmed/35544518 http://dx.doi.org/10.1371/journal.pone.0265808 |
_version_ | 1784705568189972480 |
---|---|
author | Anwar, Haroon; Caby, Simon; Dura-Bernal, Salvador; D’Onofrio, David; Hasegan, Daniel; Deible, Matt; Grunblatt, Sara; Chadderdon, George L.; Kerr, Cliff C.; Lakatos, Peter; Lytton, William W.; Hazan, Hananel; Neymotin, Samuel A. |
author_facet | Anwar, Haroon; Caby, Simon; Dura-Bernal, Salvador; D’Onofrio, David; Hasegan, Daniel; Deible, Matt; Grunblatt, Sara; Chadderdon, George L.; Kerr, Cliff C.; Lakatos, Peter; Lytton, William W.; Hazan, Hananel; Neymotin, Samuel A. |
author_sort | Anwar, Haroon |
collection | PubMed |
description | Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis on circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit-motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems. |
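The description above outlines the core loop of the study: visual populations encode the ball and racket, drive motor populations that move the racket up or down, and a reward or punishment signal gates which synaptic changes are kept. As a rough illustration only, the following is a minimal Python sketch of a reward-modulated plasticity loop of that general kind. It is not the authors' model (which uses spiking neurons and biologically detailed circuits); all names, sizes, and constants here are assumptions, and the rate-based update is a deliberate simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a small "visual" input layer projecting to two
# "motor" populations (move racket UP or DOWN), loosely following the
# dorsal-pathway description in the abstract. Sizes are illustrative.
N_VISUAL, N_MOTOR = 40, 2
weights = rng.uniform(0.0, 0.5, size=(N_VISUAL, N_MOTOR))  # plastic synaptic weights
eligibility = np.zeros_like(weights)                        # decaying eligibility traces

TAU_ELIG = 0.9     # per-step decay of eligibility traces (assumed)
LEARN_RATE = 0.01  # scaling of reward-driven weight change (assumed)

def step(visual_spikes, reward):
    """One interaction with the game environment.

    visual_spikes : binary vector of presynaptic activity (encodes ball/racket state)
    reward        : +1 when the racket hits the ball, -1 when it misses, 0 otherwise
    Returns the chosen action (0 = move up, 1 = move down).
    """
    global weights, eligibility

    # Motor drive from the visual population; noise makes action selection exploratory.
    drive = visual_spikes @ weights + rng.normal(0.0, 0.1, size=N_MOTOR)
    action = int(np.argmax(drive))
    motor_activity = np.eye(N_MOTOR)[action]

    # Pre/post coincidences are stored in a decaying eligibility trace...
    eligibility = TAU_ELIG * eligibility + np.outer(visual_spikes, motor_activity)

    # ...and only converted into lasting weight change when a reward or
    # punishment signal (the "dopaminergic" signal in the abstract) arrives.
    weights += LEARN_RATE * reward * eligibility
    np.clip(weights, 0.0, 1.0, out=weights)
    return action
```

In the paper's setting, the input vector would come from retina-like visual areas encoding object location and motion, and the reward would be delivered when the racket hits or misses the ball; here everything is collapsed into a two-action toy purely to make the reward-gated update explicit.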
format | Online Article Text |
id | pubmed-9094569 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-90945692022-05-12 Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning Anwar, Haroon Caby, Simon Dura-Bernal, Salvador D’Onofrio, David Hasegan, Daniel Deible, Matt Grunblatt, Sara Chadderdon, George L. Kerr, Cliff C. Lakatos, Peter Lytton, William W. Hazan, Hananel Neymotin, Samuel A. PLoS One Research Article Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis on circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit-motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems. Public Library of Science 2022-05-11 /pmc/articles/PMC9094569/ /pubmed/35544518 http://dx.doi.org/10.1371/journal.pone.0265808 Text en © 2022 Anwar et al https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Anwar, Haroon Caby, Simon Dura-Bernal, Salvador D’Onofrio, David Hasegan, Daniel Deible, Matt Grunblatt, Sara Chadderdon, George L. Kerr, Cliff C. Lakatos, Peter Lytton, William W. Hazan, Hananel Neymotin, Samuel A. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title | Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title_full | Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title_fullStr | Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title_full_unstemmed | Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title_short | Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
title_sort | training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9094569/ https://www.ncbi.nlm.nih.gov/pubmed/35544518 http://dx.doi.org/10.1371/journal.pone.0265808 |
work_keys_str_mv | AT anwarharoon trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT cabysimon trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT durabernalsalvador trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT donofriodavid trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT hasegandaniel trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT deiblematt trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT grunblattsara trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT chadderdongeorgel trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT kerrcliffc trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT lakatospeter trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT lyttonwilliamw trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT hazanhananel trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning AT neymotinsamuela trainingaspikingneuronalnetworkmodelofvisualmotorcortextoplayavirtualracketballgameusingreinforcementlearning |