
From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks

Bibliographic Details
Main Authors: Alfeo, Antonio Luca, Zippo, Antonio G., Catrambone, Vincenzo, Cimino, Mario G.C.A., Toschi, Nicola, Valenza, Gaetano
Format: Online Article Text
Language: English
Published: Elsevier Scientific Publishers 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232646/
https://www.ncbi.nlm.nih.gov/pubmed/37086584
http://dx.doi.org/10.1016/j.cmpb.2023.107550
_version_ 1785052031015190528
author Alfeo, Antonio Luca
Zippo, Antonio G.
Catrambone, Vincenzo
Cimino, Mario G.C.A.
Toschi, Nicola
Valenza, Gaetano
author_facet Alfeo, Antonio Luca
Zippo, Antonio G.
Catrambone, Vincenzo
Cimino, Mario G.C.A.
Toschi, Nicola
Valenza, Gaetano
author_sort Alfeo, Antonio Luca
collection PubMed
description Background: Explainable artificial intelligence (XAI) is a technology that can enhance trust in mental state classifications by providing explanations for the reasoning behind artificial intelligence (AI) model outputs, especially for high-dimensional and highly correlated brain signals. Feature importance and counterfactual explanations are two common approaches for generating these explanations, but both have drawbacks. While feature importance methods, such as Shapley additive explanations (SHAP), can be computationally expensive and sensitive to feature correlation, counterfactual explanations explain only a single outcome instead of the entire model. Methods: To overcome these limitations, we propose a new procedure for computing global feature importance that involves aggregating local counterfactual explanations. This approach is specifically tailored to fMRI signals and is based on the hypothesis that instances close to the decision boundary and their counterfactuals differ mainly in the features identified as most important for the downstream classification task. We refer to this proposed feature importance measure as the Boundary Crossing Solo Ratio (BoCSoR), since it quantifies the frequency with which a change in each feature in isolation leads to a change in classification outcome, i.e., the crossing of the model's decision boundary. Results and Conclusions: Experimental results on synthetic data and real, publicly available fMRI data from the Human Connectome Project show that the proposed BoCSoR measure is more robust to feature correlation and less computationally expensive than state-of-the-art methods. Additionally, it is equally effective in explaining the behavior of any AI model for brain signals. These properties are crucial for medical decision support systems, where many different features are often extracted from the same physiological measures and a gold standard is absent. Consequently, computing feature importance may become computationally expensive, and there may be a high probability of mutual correlation among features, leading to unreliable results from state-of-the-art XAI methods.
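The core idea stated in the abstract can be sketched in code: given instances near the decision boundary and their counterfactuals, change one feature at a time to its counterfactual value and count how often that single change flips the predicted label. This is a minimal illustrative sketch, not the authors' implementation; the function name `bocsor`, the scikit-learn classifier, and the crude counterfactual construction in the toy example are all assumptions.

```python
# Sketch of the Boundary Crossing Solo Ratio (BoCSoR) idea: per-feature
# frequency with which changing that feature alone crosses the decision
# boundary. Hypothetical implementation, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bocsor(model, X, X_cf):
    """Boundary Crossing Solo Ratio per feature.

    X    : (n, d) instances close to the decision boundary
    X_cf : (n, d) their counterfactuals (opposite predicted class)
    Returns a (d,) array: fraction of instances for which swapping in
    the counterfactual value of feature j alone flips the prediction.
    """
    y_pred = model.predict(X)
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        X_solo = X.copy()
        X_solo[:, j] = X_cf[:, j]  # change feature j in isolation
        scores[j] = np.mean(model.predict(X_solo) != y_pred)
    return scores

# Toy usage: the class depends only on feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

# Crude synthetic counterfactuals: push feature 0 across the boundary,
# leave the irrelevant features untouched.
X_cf = X.copy()
X_cf[:, 0] = -X[:, 0] - np.sign(X[:, 0]) * 0.5

importance = bocsor(clf, X, X_cf)  # feature 0 dominates; features 1, 2 score 0
```

In this toy setup only feature 0 ever crosses the boundary in isolation, so its ratio is high while the irrelevant features score zero, which is the behavior the abstract attributes to BoCSoR.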
format Online
Article
Text
id pubmed-10232646
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier Scientific Publishers
record_format MEDLINE/PubMed
spelling pubmed-10232646 2023-06-02 From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks. Alfeo, Antonio Luca; Zippo, Antonio G.; Catrambone, Vincenzo; Cimino, Mario G.C.A.; Toschi, Nicola; Valenza, Gaetano. Comput Methods Programs Biomed. Article (abstract as above). Elsevier Scientific Publishers 2023-06 /pmc/articles/PMC10232646/ /pubmed/37086584 http://dx.doi.org/10.1016/j.cmpb.2023.107550 Text en © 2023 The Author(s). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Article
Alfeo, Antonio Luca
Zippo, Antonio G.
Catrambone, Vincenzo
Cimino, Mario G.C.A.
Toschi, Nicola
Valenza, Gaetano
From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title_full From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title_fullStr From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title_full_unstemmed From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title_short From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
title_sort from local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232646/
https://www.ncbi.nlm.nih.gov/pubmed/37086584
http://dx.doi.org/10.1016/j.cmpb.2023.107550
work_keys_str_mv AT alfeoantonioluca fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks
AT zippoantoniog fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks
AT catrambonevincenzo fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks
AT ciminomariogca fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks
AT toschinicola fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks
AT valenzagaetano fromlocalcounterfactualstoglobalfeatureimportanceefficientrobustandmodelagnosticexplanationsforbrainconnectivitynetworks