
An empirical comparison of deep learning explainability approaches for EEG using simulated ground truth

Recent advancements in machine learning and deep learning (DL) based neural decoders have significantly improved decoding capabilities using scalp electroencephalography (EEG). However, the interpretability of DL models remains an under-explored area. In this study, we compared multiple model explanation methods to identify the most suitable method for EEG and to understand when some of these approaches might fail. A simulation framework was developed to evaluate the robustness and sensitivity of twelve back-propagation-based visualization methods by comparing them to ground-truth features. Multiple methods tested here showed reliability issues after randomizing either model weights or labels: e.g., the saliency approach, which is the most used visualization technique in EEG, was not class- or model-specific. We found that DeepLift was consistently accurate as well as robust in detecting the three key attributes tested here (temporal, spatial, and spectral precision). Overall, this study provides a review of model explanation methods for DL-based neural decoders and recommendations to understand when some of these methods fail and what they can capture in EEG.
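
The kind of comparison the abstract describes can be illustrated in miniature with the Captum attribution library for PyTorch. The sketch below is not the authors' code; it is a minimal, hypothetical example that builds a toy untrained 1-D CNN, simulates an EEG-like trial with a known discriminative burst (the "ground truth" feature), computes Saliency and DeepLift attributions, and runs the model-randomization sanity check mentioned above. The network architecture, channel count, and burst location are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from captum.attr import Saliency, DeepLift

torch.manual_seed(0)
N_CHANNELS, N_SAMPLES = 8, 256

def make_model() -> nn.Sequential:
    # Toy 1-D CNN classifier over (channels x time) EEG-like input.
    return nn.Sequential(
        nn.Conv1d(N_CHANNELS, 16, kernel_size=7, padding=3),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(16, 2),
    )

model = make_model().eval()  # untrained; for API illustration only

# Simulated trial: Gaussian noise plus a burst on channel 0 at samples
# 100-150 -- the known ground-truth feature an attribution map should
# highlight once a model has actually been trained on such data.
x = torch.randn(1, N_CHANNELS, N_SAMPLES)
x[0, 0, 100:150] += 2.0

saliency_map = Saliency(model).attribute(x, target=1)  # |d output / d input|
deeplift_map = DeepLift(model).attribute(x, target=1)  # zero baseline by default

# Model-randomization sanity check: a model-specific method should yield
# very different attributions for a freshly re-initialized network; high
# similarity here would indicate insensitivity to the model's parameters.
random_map = Saliency(make_model().eval()).attribute(x, target=1)
sim = F.cosine_similarity(saliency_map.flatten(), random_map.flatten(), dim=0)
print(f"saliency similarity, original vs. randomized model: {sim.item():.3f}")
```

In the paper's setup, models are trained first and the attribution maps are then scored against the simulated ground-truth feature locations in time, space, and frequency; maps from a reliable method should collapse under weight or label randomization.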


Bibliographic Details
Main Authors: Sujatha Ravindran, Akshay; Contreras-Vidal, Jose
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584975/
https://www.ncbi.nlm.nih.gov/pubmed/37853010
http://dx.doi.org/10.1038/s41598-023-43871-8
Journal: Sci Rep
Published online: 2023-10-18
© The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).