
SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition


Bibliographic Details
Main Authors: Srinivasan, Gopalakrishnan, Panda, Priyadarshini, Roy, Kaushik
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6116788/
https://www.ncbi.nlm.nih.gov/pubmed/30190670
http://dx.doi.org/10.3389/fnins.2018.00524
_version_ 1783351654456754176
author Srinivasan, Gopalakrishnan
Panda, Priyadarshini
Roy, Kaushik
author_facet Srinivasan, Gopalakrishnan
Panda, Priyadarshini
Roy, Kaushik
author_sort Srinivasan, Gopalakrishnan
collection PubMed
description In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly from the liquid neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs), which have fixed synaptic connections between the input and liquid followed by a readout layer (trained in a supervised manner) to extract the liquid states and infer the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by using fixed input-to-liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid to improve accuracy introduces scalability challenges and training inefficiencies. We propose SpiLinC, which is composed of an ensemble of multiple liquids operating in parallel. We use a "divide and learn" strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns, causing the neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers competitive classification accuracy compared to the Liquid-SNN with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and handwritten digits from the MNIST dataset.
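The abstract's core mechanism is adapting the input-to-liquid synaptic weights with Spike Timing Dependent Plasticity (STDP). As a rough illustration of how such a rule behaves, the sketch below implements a standard pair-based STDP update: a pre-synaptic spike shortly before a post-synaptic spike potentiates the synapse, and the reverse ordering depresses it. The amplitudes, time constants, and function names are illustrative assumptions, not values or code from the paper.

```python
import math

# Illustrative STDP parameters (assumed, not taken from the paper)
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # exponential time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:
        # Causal pair: pre fires before post -> strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    # Anti-causal pair: post fires before pre -> weaken the synapse
    return -A_MINUS * math.exp(dt / TAU_MINUS)

# A pre-spike 5 ms before a post-spike potentiates; the reverse depresses
print(stdp_dw(10.0, 15.0) > 0)  # True
print(stdp_dw(15.0, 10.0) < 0)  # True
```

Accumulating such updates over repeated presentations of input spike trains is what lets the liquid neurons self-organize class-specific representations without labels, as the abstract describes.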
format Online
Article
Text
id pubmed-6116788
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-6116788 2018-09-06 SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition Srinivasan, Gopalakrishnan Panda, Priyadarshini Roy, Kaushik Front Neurosci Neuroscience Frontiers Media S.A. 2018-08-23 /pmc/articles/PMC6116788/ /pubmed/30190670 http://dx.doi.org/10.3389/fnins.2018.00524 Text en Copyright © 2018 Srinivasan, Panda and Roy. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Srinivasan, Gopalakrishnan
Panda, Priyadarshini
Roy, Kaushik
SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title_full SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title_fullStr SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title_full_unstemmed SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title_short SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition
title_sort spilinc: spiking liquid-ensemble computing for unsupervised speech and image recognition
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6116788/
https://www.ncbi.nlm.nih.gov/pubmed/30190670
http://dx.doi.org/10.3389/fnins.2018.00524
work_keys_str_mv AT srinivasangopalakrishnan spilincspikingliquidensemblecomputingforunsupervisedspeechandimagerecognition
AT pandapriyadarshini spilincspikingliquidensemblecomputingforunsupervisedspeechandimagerecognition
AT roykaushik spilincspikingliquidensemblecomputingforunsupervisedspeechandimagerecognition