
Gradient-based feature-attribution explainability methods for spiking neural networks

INTRODUCTION: Spiking neural networks (SNNs) are a model of computation that mimics the behavior of biological neurons. SNNs process event data (spikes) and operate more sparsely than artificial neural networks (ANNs), resulting in ultra-low latency and low power consumption. This paper aims to adapt and evaluate gradient-based explainability methods for SNNs, which were originally developed for conventional ANNs.

METHODS: The adapted methods create input feature attribution maps for SNNs trained through backpropagation that process either event-based spiking data or real-valued data. They address the limitations of existing explainability methods for SNNs, such as poor scalability, applicability restricted to convolutional layers, the need to train an additional model, and maps of activation values rather than true attribution scores. The adapted methods are evaluated on classification tasks for both real-valued and spiking data, and their accuracy is confirmed through perturbation experiments at the pixel and spike levels.

RESULTS AND DISCUSSION: The results reveal that gradient-based SNN attribution methods successfully identify highly contributing pixels and spikes with significantly less computation time than model-agnostic methods. Additionally, the chosen input coding technique has a noticeable effect on which input features are most significant. These findings demonstrate the potential of gradient-based explainability methods for SNNs to improve our understanding of how these networks process information and to contribute to the development of more efficient and accurate SNNs.
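To make the kind of method the abstract describes concrete, below is a minimal sketch, not the authors' implementation: a hand-rolled leaky integrate-and-fire (LIF) network in plain PyTorch with a fast-sigmoid surrogate gradient (so backpropagation through the spike nonlinearity is well defined), followed by a gradient-times-input attribution map over input spikes and a simple spike-level perturbation check. All names (SpikeFn, TinySNN), the surrogate derivative, and the hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike function with a fast-sigmoid surrogate gradient,
    so gradients can flow through the non-differentiable threshold."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate derivative: 1 / (1 + 10|v|)^2 (illustrative choice)
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

class TinySNN(nn.Module):
    """Two-layer LIF network; input is a spike train of shape (T, B, n_in)."""
    def __init__(self, n_in=784, n_hidden=128, n_out=10, beta=0.9):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_out)
        self.beta = beta  # membrane leak factor

    def forward(self, x):
        T, B, _ = x.shape
        v = torch.zeros(B, self.fc1.out_features)
        logits = torch.zeros(B, self.fc2.out_features)
        for t in range(T):
            v = self.beta * v + self.fc1(x[t])   # leaky integration
            s = SpikeFn.apply(v - 1.0)           # fire at threshold 1.0
            v = v - s                            # soft reset by subtraction
            logits = logits + self.fc2(s)        # accumulate readout
        return logits / T

model = TinySNN()

# Random Bernoulli spike train standing in for rate-coded input.
x = (torch.rand(25, 1, 784) < 0.2).float().requires_grad_(True)

# Attribution map: gradient of the target-class logit w.r.t. every
# input spike, scaled by the input ("gradient x input").
target_class = 3
logit = model(x)[0, target_class]
logit.backward()
attribution = (x.grad * x).detach()   # one score per (timestep, pixel)

# Spike-level perturbation check: silence the k most-attributed spikes
# and measure the drop in the target logit.
k = 50
top = attribution.flatten().topk(k).indices
x_pert = x.detach().clone().flatten()
x_pert[top] = 0.0
x_pert = x_pert.view(x.shape)
with torch.no_grad():
    drop = logit.item() - model(x_pert)[0, target_class].item()
print(f"logit drop after removing top-{k} spikes: {drop:.4f}")
```

The snippet only illustrates the mechanics: because gradient-times-input is zero wherever no spike occurred, the top-k scores pick out actual input spikes, and silencing them should lower the target logit if the attributions are faithful. The paper's evaluation additionally compares such gradient-based maps against model-agnostic baselines.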


Bibliographic Details
Main Authors: Bitar, Ammar; Rosales, Rafael; Paulitsch, Michael
Format: Online article (text)
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Rights: Copyright © 2023 Bitar, Rosales and Paulitsch. Open access under the Creative Commons Attribution License (CC BY 4.0).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10565802/
https://www.ncbi.nlm.nih.gov/pubmed/37829721
http://dx.doi.org/10.3389/fnins.2023.1153999