
Visual explanations from spiking neural networks using inter-spike intervals

Bibliographic Details
Main Authors: Kim, Youngeun; Panda, Priyadarshini
Format: Online, Article, Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463578/
https://www.ncbi.nlm.nih.gov/pubmed/34561513
http://dx.doi.org/10.1038/s41598-021-98448-0
Collection: PubMed
Description: By emulating biological features of the brain, Spiking Neural Networks (SNNs) offer an energy-efficient alternative to conventional deep learning. To make SNNs ubiquitous, a ‘visual explanation’ technique for analysing and explaining the internal spike behavior of such temporal deep SNNs is crucial. Explaining SNNs visually makes the network more transparent, giving the end-user a tool to understand how SNNs make temporal predictions and why they make a certain decision. In this paper, we propose a bio-plausible visual explanation tool for SNNs, called Spike Activation Map (SAM). SAM yields a heatmap (i.e., a localization map) for each time-step of the input data by highlighting neurons with short inter-spike-interval activity. Interestingly, without the use of gradients or ground truth, SAM produces a temporal localization map highlighting the region of interest in an image attributed to the SNN’s prediction at each time-step. Overall, SAM marks the beginning of a new research area, ‘explainable neuromorphic computing’, that will ultimately allow end-users to establish appropriate trust in predictions from SNNs.
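The abstract only sketches the mechanism behind SAM. As a rough, non-authoritative illustration of the idea, the minimal sketch below builds a per-time-step heatmap by weighting each spiking neuron with an exponentially decaying function of its inter-spike interval (ISI) and summing over channels. The function name, the (T, C, H, W) spike-tensor layout, the decay constant gamma, and the use of only the most recent previous spike are assumptions of this sketch, not the paper's exact formulation.

import numpy as np

def spike_activation_heatmap(spikes, gamma=0.5):
    # spikes: binary spike tensor of shape (T, C, H, W) recorded from one
    #         layer over T time-steps (hypothetical input layout).
    # gamma:  decay constant of an assumed exponential kernel over the ISI;
    #         shorter ISIs contribute more, as described in the abstract.
    # Returns an array of shape (T, H, W): one localization map per time-step.
    T, C, H, W = spikes.shape
    heatmaps = np.zeros((T, H, W))
    # Time of each neuron's most recent previous spike (-inf = has not fired yet).
    last_spike = np.full((C, H, W), -np.inf)
    for t in range(T):
        fired = spikes[t] > 0
        isi = t - last_spike                       # inter-spike interval per neuron
        # A neuron spiking shortly after its previous spike gets a large weight;
        # a first-time spike has infinite ISI and contributes 0 (a simplification).
        contrib = np.where(fired, np.exp(-gamma * isi), 0.0)
        heatmaps[t] = contrib.sum(axis=0)          # aggregate over channels
        last_spike[fired] = t                      # update previous-spike times
    return heatmaps

Under these assumptions, passing the recorded spike trains of an intermediate convolutional layer yields a stack of T heatmaps that can be upsampled and overlaid on the input image, one per time-step, without using gradients or labels.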
Record ID: pubmed-8463578
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sci Rep
Published Online: 2021-09-24
Rights: © The Author(s) 2021. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.