
Explainable artificial intelligence model to predict brain states from fNIRS signals

Objective: Most Deep Learning (DL) methods for the classification of functional Near-Infrared Spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. An explainable artificial intelligence (xAI) system that can decompose the Deep Le...

Full description

Bibliographic Details
Main Authors: Shibu, Caleb Jones, Sreedharan, Sujesh, Arun, KM, Kesavadas, Chandrasekharan, Sitaram, Ranganatha
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9892761/
https://www.ncbi.nlm.nih.gov/pubmed/36741783
http://dx.doi.org/10.3389/fnhum.2022.1029784
author Shibu, Caleb Jones
Sreedharan, Sujesh
Arun, KM
Kesavadas, Chandrasekharan
Sitaram, Ranganatha
collection PubMed
description Objective: Most Deep Learning (DL) methods for the classification of functional Near-Infrared Spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. An explainable artificial intelligence (xAI) system that can decompose the Deep Learning model’s output onto the input variables for fNIRS signals is described here. Approach: We propose an xAI-fNIRS system that consists of a classification module and an explanation module. The classification module consists of two separately trained sliding window-based classifiers, namely, (i) 1-D Convolutional Neural Network (CNN); and (ii) Long Short-Term Memory (LSTM). The explanation module uses SHAP (SHapley Additive exPlanations) to explain the CNN model’s output in terms of the model’s input. Main results: We observed that the classification module was able to classify two types of datasets: (a) Motor task (MT), acquired from three subjects; and (b) Motor imagery (MI), acquired from 29 subjects, with an accuracy of over 96% for both CNN and LSTM models. The explanation module was able to identify the channels contributing the most to the classification of MI or MT and therefore identify the channel locations and whether they correspond to oxy- or deoxy-hemoglobin levels in those locations. Significance: The xAI-fNIRS system can distinguish between the brain states related to overt and covert motor imagery from fNIRS signals with high classification accuracy and is able to explain the signal features that discriminate between the brain states of interest.
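
To make the described pipeline concrete, the following is a minimal, hypothetical Python sketch (not the authors' published code): a sliding-window 1-D CNN classifier over fNIRS channels, followed by SHAP attribution of the model output back to the input channels. The channel count, window length, step size, layer sizes, and the choice of shap.DeepExplainer are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
import shap

# Assumed, illustrative dimensions (not taken from the paper):
N_CHANNELS = 40   # fNIRS channels (HbO and HbR time courses stacked)
WINDOW_LEN = 30   # samples per sliding window
N_CLASSES = 2     # e.g. task vs. rest

class FnirsCNN(nn.Module):
    """1-D CNN mapping one window of shape (channels, time) to class logits."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.fc = nn.Linear(64 * (WINDOW_LEN // 4), N_CLASSES)

    def forward(self, x):              # x: (batch, N_CHANNELS, WINDOW_LEN)
        return self.fc(self.conv(x).flatten(1))

def sliding_windows(recording, step=5):
    """Cut a (channels, samples) recording into overlapping windows."""
    starts = range(0, recording.shape[1] - WINDOW_LEN + 1, step)
    return np.stack([recording[:, s:s + WINDOW_LEN] for s in starts])

if __name__ == "__main__":
    model = FnirsCNN().eval()          # in practice the model would be trained first
    recording = np.random.randn(N_CHANNELS, 600).astype(np.float32)  # placeholder data
    windows = torch.from_numpy(sliding_windows(recording))

    # Explanation module: attribute the CNN output to the input with SHAP.
    background = windows[:32]          # reference windows for the explainer
    explainer = shap.DeepExplainer(model, background)
    shap_vals = explainer.shap_values(windows[32:40])
    if isinstance(shap_vals, list):    # older shap versions return one array per class
        shap_vals = shap_vals[1]
    shap_vals = np.asarray(shap_vals)  # (windows, channels, time[, classes])

    # Rank channels by mean absolute SHAP value over windows, time (and classes).
    importance = np.abs(shap_vals).mean(axis=(0,) + tuple(range(2, shap_vals.ndim)))
    print("Most influential channels:", np.argsort(importance)[::-1][:5])
```

Ranking channels by mean absolute SHAP value is one straightforward way to surface the channel locations, and the oxy-/deoxy-hemoglobin components at those locations, that drive the classification, in the spirit of the explanation module described above.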
format Online
Article
Text
id pubmed-9892761
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9892761 2023-02-03 Explainable artificial intelligence model to predict brain states from fNIRS signals Shibu, Caleb Jones; Sreedharan, Sujesh; Arun, KM; Kesavadas, Chandrasekharan; Sitaram, Ranganatha Front Hum Neurosci Human Neuroscience Frontiers Media S.A. 2023-01-19 /pmc/articles/PMC9892761/ /pubmed/36741783 http://dx.doi.org/10.3389/fnhum.2022.1029784 Text en Copyright © 2022 Shibu, Sreedharan, Arun, Kesavadas and Sitaram. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Explainable artificial intelligence model to predict brain states from fNIRS signals
topic Human Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9892761/
https://www.ncbi.nlm.nih.gov/pubmed/36741783
http://dx.doi.org/10.3389/fnhum.2022.1029784