Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning
Main Authors: | Haşegan, Daniel; Deible, Matt; Earl, Christopher; D’Onofrio, David; Hazan, Hananel; Anwar, Haroon; Neymotin, Samuel A. |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9563231/ https://www.ncbi.nlm.nih.gov/pubmed/36249482 http://dx.doi.org/10.3389/fncom.2022.1017284 |
author | Haşegan, Daniel; Deible, Matt; Earl, Christopher; D’Onofrio, David; Hazan, Hananel; Anwar, Haroon; Neymotin, Samuel A. |
author_sort | Haşegan, Daniel |
collection | PubMed |
description | Artificial neural networks (ANNs) have been successfully trained to perform a wide range of sensory-motor behaviors. In contrast, the performance of spiking neuronal network (SNN) models trained to perform similar behaviors remains relatively suboptimal. In this work, we aimed to push the field of SNNs forward by exploring the potential of different learning mechanisms to achieve optimal performance. We trained SNNs to solve the CartPole reinforcement learning (RL) control problem using two learning mechanisms operating at different timescales: (1) spike-timing-dependent reinforcement learning (STDP-RL) and (2) evolutionary strategy (EVOL). Though the role of STDP-RL in biological systems is well established, several other mechanisms, though not fully understood, work in concert during learning in vivo. Recreating accurate models that capture the interaction of STDP-RL with these diverse learning mechanisms is extremely difficult. EVOL is an alternative method and has been successfully used in many studies to fit model neural responsiveness to electrophysiological recordings and, in some cases, for classification problems. One advantage of EVOL is that it may not need to capture all interacting components of synaptic plasticity and thus provides a better alternative to STDP-RL. Here, we compared the performance of each algorithm after training, which revealed EVOL as a powerful method for training SNNs to perform sensory-motor behaviors. Our modeling opens up new capabilities for SNNs in RL and could serve as a testbed for neurobiologists aiming to understand multi-timescale learning mechanisms and dynamics in neuronal circuits. |
format | Online Article Text |
id | pubmed-9563231 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9563231 2022-10-15 Front Comput Neurosci Neuroscience Frontiers Media S.A. 2022-09-30 /pmc/articles/PMC9563231/ /pubmed/36249482 http://dx.doi.org/10.3389/fncom.2022.1017284 Text en Copyright © 2022 Haşegan, Deible, Earl, D’Onofrio, Hazan, Anwar and Neymotin. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9563231/ https://www.ncbi.nlm.nih.gov/pubmed/36249482 http://dx.doi.org/10.3389/fncom.2022.1017284 |
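The description field above names an evolutionary strategy (EVOL) as one of the two training mechanisms, alongside STDP-RL. As a rough illustration of the perturb-evaluate-update loop behind such a strategy, the sketch below trains a plain linear policy on Gymnasium's CartPole-v1. It is a minimal stand-in, not the paper's implementation: the study trains spiking neuronal networks, and the population size, noise scale, learning rate, rank normalization, and the `evaluate` helper used here are all illustrative assumptions.

```python
# Illustrative sketch of a population-based evolutionary strategy (ES) on CartPole.
# A linear policy stands in for the spiking network; all hyperparameters are
# arbitrary choices for demonstration, not values from the paper.
import numpy as np
import gymnasium as gym

POP_SIZE = 32      # perturbed candidates per generation
SIGMA = 0.1        # standard deviation of weight perturbations
LR = 0.05          # step size for the weight update
GENERATIONS = 50

env = gym.make("CartPole-v1")
n_obs = env.observation_space.shape[0]   # 4 state variables
n_act = env.action_space.n               # 2 discrete actions


def evaluate(weights: np.ndarray) -> float:
    """Run one CartPole episode with a linear policy and return the total reward."""
    w = weights.reshape(n_obs, n_act)
    obs, _ = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(np.argmax(obs @ w))              # greedy action from linear scores
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    return total


rng = np.random.default_rng(0)
theta = np.zeros(n_obs * n_act)                       # flat parameter vector

for gen in range(GENERATIONS):
    noise = rng.standard_normal((POP_SIZE, theta.size))
    fitness = np.array([evaluate(theta + SIGMA * eps) for eps in noise])
    # Rank-normalize fitness so the update is insensitive to reward scale.
    ranks = fitness.argsort().argsort().astype(float)
    advantage = (ranks - ranks.mean()) / (ranks.std() + 1e-8)
    # Move the weights along the fitness-weighted average perturbation direction.
    theta += LR / (POP_SIZE * SIGMA) * noise.T @ advantage
    print(f"generation {gen:3d}  mean reward {fitness.mean():6.1f}")
```

In the setting described in the abstract, the perturbations would instead be applied to the SNN's synaptic weights, and each candidate's fitness would be the reward accumulated while the spiking network's motor output drives the cart.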