
A Spiking Neural Network Framework for Robust Sound Classification

Bibliographic Details
Main Authors: Wu, Jibin, Chua, Yansong, Zhang, Malu, Li, Haizhou, Tan, Kay Chen
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6252336/
https://www.ncbi.nlm.nih.gov/pubmed/30510500
http://dx.doi.org/10.3389/fnins.2018.00836
_version_ 1783373239574069248
author Wu, Jibin
Chua, Yansong
Zhang, Malu
Li, Haizhou
Tan, Kay Chen
author_facet Wu, Jibin
Chua, Yansong
Zhang, Malu
Li, Haizhou
Tan, Kay Chen
author_sort Wu, Jibin
collection PubMed
description Environmental sounds form part of our daily life. With the advancement of deep learning models and the abundance of training data, the performance of automatic sound classification (ASC) systems has improved significantly in recent years. However, the high computational cost, and hence high power consumption, remains a major hurdle for large-scale implementation of ASC systems on mobile and wearable devices. Motivated by the observation that humans are highly effective and consume little power whilst analyzing complex audio scenes, we propose a biologically plausible ASC framework, namely SOM-SNN. This framework uses the unsupervised self-organizing map (SOM) to represent the frequency contents embedded within the acoustic signals, followed by an event-based spiking neural network (SNN) for spatiotemporal spiking pattern classification. We report experimental results on the RWCP environmental sound and TIDIGITS spoken digits datasets, which demonstrate classification accuracies competitive with other deep learning and SNN-based models. The SOM-SNN framework is also shown to be highly robust to corrupting noise after multi-condition training, whereby the model is trained with noise-corrupted sound samples. Moreover, we discover the early decision-making capability of the proposed framework: an accurate classification can be made with only a partial presentation of the input.
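The description outlines a two-stage pipeline: an unsupervised SOM that maps spectral frames of the audio onto a topographic grid, followed by an event-based SNN that classifies the resulting spatiotemporal spike patterns. The following is a minimal sketch of the SOM front-end only, in Python/NumPy; the map size, learning schedule, input features, and winner-take-all spike encoding are illustrative assumptions, as this record does not specify the paper's actual parameters.

```python
import numpy as np

# Sketch of a SOM front-end for acoustic frames, under assumed details
# (map size, decay schedule, random "spectrogram" input) not given in
# this record. Each frame is assigned to its best-matching unit (BMU);
# the sequence of BMU indices over time forms a spatiotemporal spike
# pattern that a downstream SNN could classify.

rng = np.random.default_rng(0)

def train_som(frames, map_size=10, epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D self-organizing map on spectrogram frames.

    frames: (n_frames, n_freq_bins) array of short-time spectra.
    Returns weights of shape (map_size * map_size, n_freq_bins).
    """
    n_units = map_size * map_size
    w = rng.random((n_units, frames.shape[1]))
    # Grid coordinate of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(map_size)
                       for j in range(map_size)])
    n_steps = epochs * len(frames)
    t = 0
    for _ in range(epochs):
        for x in frames:
            lr = lr0 * np.exp(-t / n_steps)        # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_steps)  # shrinking neighborhood
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian neighborhood
            w += lr * h[:, None] * (x - w)         # pull units toward x
            t += 1
    return w

def encode_spikes(frames, w):
    """Encode each frame as a spike event (time_step, winning_unit)."""
    bmus = np.argmin(
        np.linalg.norm(w[None, :, :] - frames[:, None, :], axis=2), axis=1
    )
    return list(enumerate(bmus))

# Toy usage: random frames standing in for real acoustic features.
frames = rng.random((200, 20))
w = train_som(frames)
events = encode_spikes(frames, w)
print(events[:5])
```

In the full framework, these (time step, winning unit) events would be presented to a trained SNN classifier; this sketch stops at the encoding stage.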
format Online
Article
Text
id pubmed-6252336
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-62523362018-12-03 A Spiking Neural Network Framework for Robust Sound Classification Wu, Jibin Chua, Yansong Zhang, Malu Li, Haizhou Tan, Kay Chen Front Neurosci Neuroscience Environmental sounds form part of our daily life. With the advancement of deep learning models and the abundance of training data, the performance of automatic sound classification (ASC) systems has improved significantly in recent years. However, the high computational cost, and hence high power consumption, remains a major hurdle for large-scale implementation of ASC systems on mobile and wearable devices. Motivated by the observation that humans are highly effective and consume little power whilst analyzing complex audio scenes, we propose a biologically plausible ASC framework, namely SOM-SNN. This framework uses the unsupervised self-organizing map (SOM) to represent the frequency contents embedded within the acoustic signals, followed by an event-based spiking neural network (SNN) for spatiotemporal spiking pattern classification. We report experimental results on the RWCP environmental sound and TIDIGITS spoken digits datasets, which demonstrate classification accuracies competitive with other deep learning and SNN-based models. The SOM-SNN framework is also shown to be highly robust to corrupting noise after multi-condition training, whereby the model is trained with noise-corrupted sound samples. Moreover, we discover the early decision-making capability of the proposed framework: an accurate classification can be made with only a partial presentation of the input. Frontiers Media S.A. 2018-11-19 /pmc/articles/PMC6252336/ /pubmed/30510500 http://dx.doi.org/10.3389/fnins.2018.00836 Text en Copyright © 2018 Wu, Chua, Zhang, Li and Tan. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Wu, Jibin
Chua, Yansong
Zhang, Malu
Li, Haizhou
Tan, Kay Chen
A Spiking Neural Network Framework for Robust Sound Classification
title A Spiking Neural Network Framework for Robust Sound Classification
title_full A Spiking Neural Network Framework for Robust Sound Classification
title_fullStr A Spiking Neural Network Framework for Robust Sound Classification
title_full_unstemmed A Spiking Neural Network Framework for Robust Sound Classification
title_short A Spiking Neural Network Framework for Robust Sound Classification
title_sort spiking neural network framework for robust sound classification
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6252336/
https://www.ncbi.nlm.nih.gov/pubmed/30510500
http://dx.doi.org/10.3389/fnins.2018.00836
work_keys_str_mv AT wujibin aspikingneuralnetworkframeworkforrobustsoundclassification
AT chuayansong aspikingneuralnetworkframeworkforrobustsoundclassification
AT zhangmalu aspikingneuralnetworkframeworkforrobustsoundclassification
AT lihaizhou aspikingneuralnetworkframeworkforrobustsoundclassification
AT tankaychen aspikingneuralnetworkframeworkforrobustsoundclassification
AT wujibin spikingneuralnetworkframeworkforrobustsoundclassification
AT chuayansong spikingneuralnetworkframeworkforrobustsoundclassification
AT zhangmalu spikingneuralnetworkframeworkforrobustsoundclassification
AT lihaizhou spikingneuralnetworkframeworkforrobustsoundclassification
AT tankaychen spikingneuralnetworkframeworkforrobustsoundclassification