
Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks

Liquid State Machines (LSMs) are computing reservoirs composed of recurrently connected Spiking Neural Networks. They have attracted research interest for their capacity to model biological structures and as promising pattern recognition tools suited to implementation in neuromorphic processors, benefiting from the modest use of computing resources in their training process. However, it has been difficult to optimize LSMs for complex tasks such as event-based computer vision, and few implementations on large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented on the SpiNNaker neuromorphic processor are able to classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset. The readout layer is trained using a recent adaptation of back-propagation-through-time (BPTT) for SNNs, while the internal weights of the reservoir are kept static. Results show that mapping our LSM from a Deep Learning framework to SpiNNaker does not affect the performance of the classification task. Additionally, we show that weight quantization, which substantially reduces the memory footprint of the LSM, has only a small impact on its performance.
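
The abstract only sketches the pipeline in prose. For orientation, the following Python fragment is a minimal toy illustration of that structure, written for this record and not taken from the paper: a fixed random recurrent reservoir of leaky integrate-and-fire neurons driven by event frames, plus a stand-in for post-training weight quantization. All sizes, constants, and function names here are assumptions; the actual readout training with a BPTT adaptation for SNNs and the mapping to SpiNNaker are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N-MNIST events are 34x34 pixels with 2 polarities, 10 digit classes.
N_IN, N_RES, N_OUT = 34 * 34 * 2, 500, 10
TAU_MEM, V_TH, DT = 20.0, 1.0, 1.0           # LIF time constant (ms), firing threshold, time step (ms)

# Reservoir weights are drawn once and kept static; only a readout layer (not shown)
# would be trained offline, per the abstract, with a BPTT adaptation for SNNs.
W_in = rng.normal(0.0, 0.1, (N_RES, N_IN))
W_rec = rng.normal(0.0, 0.05, (N_RES, N_RES))
np.fill_diagonal(W_rec, 0.0)                  # no self-connections in this toy reservoir

def run_reservoir(spike_frames):
    """Run the LIF reservoir over a (T, N_IN) array of binary input spike frames
    and return the (T, N_RES) reservoir spike trains fed to the readout."""
    decay = np.exp(-DT / TAU_MEM)
    v = np.zeros(N_RES)                       # membrane potentials
    s = np.zeros(N_RES)                       # spikes emitted at the previous step
    spikes = []
    for x in spike_frames:
        v = decay * v + W_in @ x + W_rec @ s  # leak, feed-forward input, recurrence
        s = (v >= V_TH).astype(float)         # emit a spike where the threshold is crossed
        v = np.where(s > 0.0, 0.0, v)         # reset the membrane of spiking neurons
        spikes.append(s)
    return np.stack(spikes)

def quantize(w, n_bits=8):
    """Uniform symmetric weight quantization as a stand-in for the paper's
    memory-saving quantization step (scheme and bit width are assumptions)."""
    scale = np.max(np.abs(w)) / (2 ** (n_bits - 1) - 1) + 1e-12
    return np.round(w / scale) * scale

In such a toy setup, the reservoir spike trains (for example, summed over time) would feed a dense readout of N_OUT class scores trained in a deep learning framework while W_in and W_rec stay fixed, and the quantized weights would then be what is loaded onto the neuromorphic hardware.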

Bibliographic Details
Main Authors: Patiño-Saucedo, Alberto; Rostro-González, Horacio; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8964061/
https://www.ncbi.nlm.nih.gov/pubmed/35360182
http://dx.doi.org/10.3389/fnins.2022.819063
author Patiño-Saucedo, Alberto
Rostro-González, Horacio
Serrano-Gotarredona, Teresa
Linares-Barranco, Bernabé
collection PubMed
description Liquid State Machines (LSMs) are computing reservoirs composed of recurrently connected Spiking Neural Networks. They have attracted research interest for their capacity to model biological structures and as promising pattern recognition tools suited to implementation in neuromorphic processors, benefiting from the modest use of computing resources in their training process. However, it has been difficult to optimize LSMs for complex tasks such as event-based computer vision, and few implementations on large-scale neuromorphic processors have been attempted. In this work, we show that offline-trained LSMs implemented on the SpiNNaker neuromorphic processor are able to classify visual events, achieving state-of-the-art performance on the event-based N-MNIST dataset. The readout layer is trained using a recent adaptation of back-propagation-through-time (BPTT) for SNNs, while the internal weights of the reservoir are kept static. Results show that mapping our LSM from a Deep Learning framework to SpiNNaker does not affect the performance of the classification task. Additionally, we show that weight quantization, which substantially reduces the memory footprint of the LSM, has only a small impact on its performance.
format Online
Article
Text
id pubmed-8964061
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling Front Neurosci (Neuroscience). Frontiers Media S.A., published online 2022-03-14. /pmc/articles/PMC8964061/ /pubmed/35360182 http://dx.doi.org/10.3389/fnins.2022.819063. Copyright © 2022 Patiño-Saucedo, Rostro-González, Serrano-Gotarredona and Linares-Barranco. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/): use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice; no use, distribution or reproduction is permitted which does not comply with these terms.
title Liquid State Machine on SpiNNaker for Spatio-Temporal Classification Tasks
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8964061/
https://www.ncbi.nlm.nih.gov/pubmed/35360182
http://dx.doi.org/10.3389/fnins.2022.819063