An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks
The auditory front-end is an integral part of a spiking neural network (SNN) when performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech or audio, into an efficient, effective, and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, including the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we also build and publish two spike-encoded speech datasets, Spike-TIDIGITS and Spike-TIMIT, for researchers to use in benchmarking future SNN research.
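The abstract names the stages of the BAE pipeline at a high level: a cochlear filter bank, an inner-hair-cell stage, psychoacoustic masking, and spike encoding by the auditory nerve. The sketch below is not the authors' implementation; it is a minimal Python illustration of that general style of auditory spike encoding, assuming a Butterworth band-pass bank in place of cochlear (gammatone) filters, a rectify-and-smooth envelope in place of a full inner-hair-cell model, and a simple send-on-delta threshold in place of the paper's masking-aware spike encoding. The function names (`erb_centre_freqs`, `encode_spikes`) and all parameter values are hypothetical choices for illustration.

```python
# Illustrative sketch only, not the authors' implementation: a simplified
# auditory spike encoder in the spirit of the pipeline named in the abstract
# (cochlear filtering -> inner-hair-cell envelope -> spike generation).
# Filter types, channel count, and thresholds are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt


def erb_centre_freqs(low_hz, high_hz, n_channels):
    """Centre frequencies spaced uniformly on the ERB-rate scale (Glasberg & Moore)."""
    erb_rate = lambda f: 21.4 * np.log10(0.00437 * f + 1.0)
    rates = np.linspace(erb_rate(low_hz), erb_rate(high_hz), n_channels)
    return (10.0 ** (rates / 21.4) - 1.0) / 0.00437


def encode_spikes(signal, fs, n_channels=16, threshold=0.02):
    """Encode a mono waveform into per-channel spike times (in seconds)."""
    centre_freqs = erb_centre_freqs(100.0, 0.4 * fs, n_channels)
    spike_trains = []
    for cf in centre_freqs:
        # Butterworth band-pass as a crude stand-in for a cochlear (gammatone) filter.
        bandwidth = 0.25 * cf
        lo = max(cf - bandwidth, 10.0)
        hi = min(cf + bandwidth, 0.49 * fs)
        band_sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)

        # Half-wave rectify and low-pass smooth: a rough inner-hair-cell envelope.
        env_sos = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
        envelope = sosfiltfilt(env_sos, np.maximum(band, 0.0))

        # Send-on-delta thresholding: emit a spike whenever the envelope has risen
        # by `threshold` since the last spike. (The psychoacoustic masking stage
        # described in the paper is omitted here.)
        spikes, reference = [], 0.0
        for n, value in enumerate(envelope):
            if value - reference >= threshold:
                spikes.append(n / fs)
                reference = value
            elif value < reference:
                reference = value
        spike_trains.append(np.asarray(spikes))
    return centre_freqs, spike_trains


if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.5, 1.0 / fs)
    tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # toy input instead of TIDIGITS/TIMIT audio
    cfs, trains = encode_spikes(tone, fs)
    for cf, train in zip(cfs, trains):
        print(f"{cf:7.1f} Hz channel: {len(train)} spikes")
```

A faithful implementation would swap in gammatone filters, add the masking stage from the psychoacoustic model, and pair the encoder with a decoder that reconstructs each channel's envelope from its spike train for PESQ evaluation, as the abstract describes.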
Main Authors: | Pan, Zihan; Chua, Yansong; Wu, Jibin; Zhang, Malu; Li, Haizhou; Ambikairajah, Eliathamby |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2020 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6987407/ https://www.ncbi.nlm.nih.gov/pubmed/32038132 http://dx.doi.org/10.3389/fnins.2019.01420 |
_version_ | 1783492133804572672 |
---|---|
author | Pan, Zihan Chua, Yansong Wu, Jibin Zhang, Malu Li, Haizhou Ambikairajah, Eliathamby |
author_facet | Pan, Zihan Chua, Yansong Wu, Jibin Zhang, Malu Li, Haizhou Ambikairajah, Eliathamby |
author_sort | Pan, Zihan |
collection | PubMed |
description | The auditory front-end is an integral part of a spiking neural network (SNN) when performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech or audio, into an efficient, effective, and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, including the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we also build and publish two spike-encoded speech datasets, Spike-TIDIGITS and Spike-TIMIT, for researchers to use in benchmarking future SNN research. |
format | Online Article Text |
id | pubmed-6987407 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6987407 2020-02-07 An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks Pan, Zihan Chua, Yansong Wu, Jibin Zhang, Malu Li, Haizhou Ambikairajah, Eliathamby Front Neurosci Neuroscience The auditory front-end is an integral part of a spiking neural network (SNN) when performing auditory cognitive tasks. It encodes a temporally dynamic stimulus, such as speech or audio, into an efficient, effective, and reconstructable spike pattern to facilitate subsequent processing. However, most auditory front-ends in current studies have not made use of recent findings in psychoacoustics and physiology concerning human listening. In this paper, we propose a neural encoding and decoding scheme that is optimized for audio processing. The neural encoding scheme, which we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, including the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and spike encoding by the auditory nerve. We evaluate the perceptual quality of the BAE scheme using PESQ, and its performance through sound classification and speech recognition experiments. Finally, we also build and publish two spike-encoded speech datasets, Spike-TIDIGITS and Spike-TIMIT, for researchers to use in benchmarking future SNN research. Frontiers Media S.A. 2020-01-22 /pmc/articles/PMC6987407/ /pubmed/32038132 http://dx.doi.org/10.3389/fnins.2019.01420 Text en Copyright © 2020 Pan, Chua, Wu, Zhang, Li and Ambikairajah. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Pan, Zihan Chua, Yansong Wu, Jibin Zhang, Malu Li, Haizhou Ambikairajah, Eliathamby An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title | An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title_full | An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title_fullStr | An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title_full_unstemmed | An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title_short | An Efficient and Perceptually Motivated Auditory Neural Encoding and Decoding Algorithm for Spiking Neural Networks |
title_sort | efficient and perceptually motivated auditory neural encoding and decoding algorithm for spiking neural networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6987407/ https://www.ncbi.nlm.nih.gov/pubmed/32038132 http://dx.doi.org/10.3389/fnins.2019.01420 |
work_keys_str_mv | AT panzihan anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT chuayansong anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT wujibin anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT zhangmalu anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT lihaizhou anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT ambikairajaheliathamby anefficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT panzihan efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT chuayansong efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT wujibin efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT zhangmalu efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT lihaizhou efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks AT ambikairajaheliathamby efficientandperceptuallymotivatedauditoryneuralencodinganddecodingalgorithmforspikingneuralnetworks |