
Explainable AI: A review of applications to neuroimaging data

Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models for representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level performance and even higher in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnose but also to provide explanations that support the model decision in a context that a human can readily interpret. The limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the “black box” and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.

Bibliographic Details
Main Authors: Farahani, Farzad V., Fiok, Krzysztof, Lahijanian, Behshad, Karwowski, Waldemar, Douglas, Pamela K.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9793854/
https://www.ncbi.nlm.nih.gov/pubmed/36583102
http://dx.doi.org/10.3389/fnins.2022.906290
_version_ 1784859920686907392
author Farahani, Farzad V.
Fiok, Krzysztof
Lahijanian, Behshad
Karwowski, Waldemar
Douglas, Pamela K.
author_facet Farahani, Farzad V.
Fiok, Krzysztof
Lahijanian, Behshad
Karwowski, Waldemar
Douglas, Pamela K.
author_sort Farahani, Farzad V.
collection PubMed
description Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models for representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level performance and even higher in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnose but also to provide explanations that support the model decision in a context that a human can readily interpret. The limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the “black box” and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
format Online
Article
Text
id pubmed-9793854
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9793854 2022-12-28 Explainable AI: A review of applications to neuroimaging data Farahani, Farzad V. Fiok, Krzysztof Lahijanian, Behshad Karwowski, Waldemar Douglas, Pamela K. Front Neurosci Neuroscience Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models for representations learned via hierarchical processing in the human brain. In medical imaging, these models have shown human-level performance and even higher in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnose but also to provide explanations that support the model decision in a context that a human can readily interpret. The limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the “black box” and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques as applied to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls. Frontiers Media S.A. 2022-12-01 /pmc/articles/PMC9793854/ /pubmed/36583102 http://dx.doi.org/10.3389/fnins.2022.906290 Text en Copyright © 2022 Farahani, Fiok, Lahijanian, Karwowski and Douglas. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Farahani, Farzad V.
Fiok, Krzysztof
Lahijanian, Behshad
Karwowski, Waldemar
Douglas, Pamela K.
Explainable AI: A review of applications to neuroimaging data
title Explainable AI: A review of applications to neuroimaging data
title_full Explainable AI: A review of applications to neuroimaging data
title_fullStr Explainable AI: A review of applications to neuroimaging data
title_full_unstemmed Explainable AI: A review of applications to neuroimaging data
title_short Explainable AI: A review of applications to neuroimaging data
title_sort explainable ai: a review of applications to neuroimaging data
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9793854/
https://www.ncbi.nlm.nih.gov/pubmed/36583102
http://dx.doi.org/10.3389/fnins.2022.906290
work_keys_str_mv AT farahanifarzadv explainableaiareviewofapplicationstoneuroimagingdata
AT fiokkrzysztof explainableaiareviewofapplicationstoneuroimagingdata
AT lahijanianbehshad explainableaiareviewofapplicationstoneuroimagingdata
AT karwowskiwaldemar explainableaiareviewofapplicationstoneuroimagingdata
AT douglaspamelak explainableaiareviewofapplicationstoneuroimagingdata