Reinforcement Learning With Low-Complexity Liquid State Machines
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. The random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
Main Authors: | Ponghiran, Wachirawit; Srinivasan, Gopalakrishnan; Roy, Kaushik
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2019
Subjects: | Neuroscience
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6718696/ https://www.ncbi.nlm.nih.gov/pubmed/31507361 http://dx.doi.org/10.3389/fnins.2019.00883
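The abstract above amounts to a concrete recipe: drive a fixed, sparsely and randomly connected reservoir of leaky integrate-and-fire (LIF) spiking neurons with rate-coded inputs, and train only a linear readout on the reservoir's filtered spike activity using Q-learning. The sketch below illustrates that recipe; it is a minimal toy under stated assumptions, not the paper's implementation. The network sizes, time constants, input coding, and the `lsm_step`/`q_update` helpers are all illustrative assumptions (the paper uses Atari-game inputs and a systematic initialization scheme not reproduced here).

```python
# Minimal liquid state machine (LSM) with a Q-learning readout. The sparse,
# random reservoir weights are fixed; only the linear readout W_out is trained.
# All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_ACT = 4, 100, 2   # input dims, reservoir neurons, actions
SPARSITY = 0.1                   # fraction of nonzero recurrent synapses
SIM_STEPS = 20                   # spiking time-steps per RL time-step
V_TH, V_DECAY = 1.0, 0.9         # LIF threshold and leak factor
TRACE_DECAY = 0.8                # decay of the low-pass spike trace

# Fixed random weights (never trained): a dense input projection and a
# sparse recurrent matrix that gives the reservoir its fading memory.
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))
W_rec = rng.normal(0.0, 0.5, (N_RES, N_RES))
W_rec *= rng.random((N_RES, N_RES)) < SPARSITY

# Trainable readout: maps reservoir spike traces to Q-values, one row per action.
W_out = np.zeros((N_ACT, N_RES))

def lsm_step(x, v, trace):
    """Run the spiking reservoir for SIM_STEPS and return its updated state.

    x     : rate-coded input in [0, 1] for the current RL time-step
    v     : membrane potentials, carried across RL time-steps
    trace : low-pass filtered spike trace, the readout's input
    """
    for _ in range(SIM_STEPS):
        spikes_in = (rng.random(N_IN) < x).astype(float)  # Poisson-like coding
        rec_spikes = (v >= V_TH).astype(float)
        v = V_DECAY * v * (1.0 - rec_spikes)              # leak + reset on spike
        v = v + W_in @ spikes_in + W_rec @ rec_spikes
        trace = TRACE_DECAY * trace + rec_spikes
    return v, trace

def q_values(trace):
    return W_out @ trace

ALPHA, GAMMA, EPS = 0.01, 0.99, 0.1

def act(trace):
    """Epsilon-greedy action selection over the readout's Q-values."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACT))
    return int(np.argmax(q_values(trace)))

def q_update(trace, a, r, next_trace, done):
    """One Q-learning step on the readout only; the reservoir stays frozen."""
    target = r + (0.0 if done else GAMMA * float(q_values(next_trace).max()))
    td_error = target - q_values(trace)[a]
    W_out[a] += ALPHA * td_error * trace

# One illustrative interaction with hypothetical observations and reward.
v, trace = np.zeros(N_RES), np.zeros(N_RES)
v, trace = lsm_step(np.array([0.9, 0.1, 0.5, 0.2]), v, trace)
a = act(trace)
v, next_trace = lsm_step(np.array([0.1, 0.8, 0.3, 0.6]), v, trace)
q_update(trace, a, 1.0, next_trace, done=False)
```

Because the membrane potentials `v` and the spike trace carry over between RL time-steps, the reservoir retains the fading memory of past inputs that the abstract credits for temporal integration and learning from partial state observations.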
_version_ | 1783447775982125056 |
author | Ponghiran, Wachirawit; Srinivasan, Gopalakrishnan; Roy, Kaushik
author_facet | Ponghiran, Wachirawit; Srinivasan, Gopalakrishnan; Roy, Kaushik
author_sort | Ponghiran, Wachirawit |
collection | PubMed |
description | We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. The random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
format | Online Article Text |
id | pubmed-6718696 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6718696 2019-09-10 Reinforcement Learning With Low-Complexity Liquid State Machines Ponghiran, Wachirawit; Srinivasan, Gopalakrishnan; Roy, Kaushik Front Neurosci Neuroscience Frontiers Media S.A. 2019-08-27 /pmc/articles/PMC6718696/ /pubmed/31507361 http://dx.doi.org/10.3389/fnins.2019.00883 Text en Copyright © 2019 Ponghiran, Srinivasan and Roy. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Neuroscience; Ponghiran, Wachirawit; Srinivasan, Gopalakrishnan; Roy, Kaushik; Reinforcement Learning With Low-Complexity Liquid State Machines
title | Reinforcement Learning With Low-Complexity Liquid State Machines |
title_full | Reinforcement Learning With Low-Complexity Liquid State Machines |
title_fullStr | Reinforcement Learning With Low-Complexity Liquid State Machines |
title_full_unstemmed | Reinforcement Learning With Low-Complexity Liquid State Machines |
title_short | Reinforcement Learning With Low-Complexity Liquid State Machines |
title_sort | reinforcement learning with low-complexity liquid state machines |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6718696/ https://www.ncbi.nlm.nih.gov/pubmed/31507361 http://dx.doi.org/10.3389/fnins.2019.00883 |
work_keys_str_mv | AT ponghiranwachirawit reinforcementlearningwithlowcomplexityliquidstatemachines AT srinivasangopalakrishnan reinforcementlearningwithlowcomplexityliquidstatemachines AT roykaushik reinforcementlearningwithlowcomplexityliquidstatemachines |