
Explainability of deep neural networks for MRI analysis of brain tumors

PURPOSE: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal restriction before applying these methods in clinical practice.


Bibliographic Details
Main Authors: Zeineldin, Ramy A., Karar, Mohamed E., Elshaer, Ziad, Coburger, Jan, Wirtz, Christian R., Burgert, Oliver, Mathis-Ullrich, Franziska
Format: Online Article Text
Language: English
Published: Springer International Publishing 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9463287/
https://www.ncbi.nlm.nih.gov/pubmed/35460019
http://dx.doi.org/10.1007/s11548-022-02619-x
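
For orientation only (not part of the bibliographic record, and not the NeuroXAI API): a minimal Python/PyTorch sketch of a vanilla-gradient saliency map, one standard XAI visualization technique of the kind the abstract above refers to. The model and MR input below are hypothetical placeholders.

    # Illustrative sketch only; all names are placeholders, not the NeuroXAI API.
    import torch
    import torch.nn as nn

    # Hypothetical stand-in classifier (the paper's actual networks are not reproduced here).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
    )
    model.eval()

    mr_slice = torch.randn(1, 1, 240, 240, requires_grad=True)  # dummy single-channel MR slice
    score = model(mr_slice)[0].max()            # score of the predicted class
    score.backward()                            # gradient of that score w.r.t. the input
    saliency = mr_slice.grad.abs().squeeze()    # visual attention map: |d score / d input|

The resulting saliency tensor can be overlaid on the input slice to inspect which voxels drive the prediction.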
_version_ 1784787364906795008
author Zeineldin, Ramy A.
Karar, Mohamed E.
Elshaer, Ziad
Coburger, Jan
Wirtz, Christian R.
Burgert, Oliver
Mathis-Ullrich, Franziska
author_facet Zeineldin, Ramy A.
Karar, Mohamed E.
Elshaer, Ziad
Coburger, Jan
Wirtz, Christian R.
Burgert, Oliver
Mathis-Ullrich, Franziska
author_sort Zeineldin, Ramy A.
collection PubMed
description PURPOSE: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal restriction before applying these methods in clinical practice. METHODS: In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent. RESULTS: NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN. CONCLUSION: Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI.
format Online
Article
Text
id pubmed-9463287
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-9463287 2022-09-11 Explainability of deep neural networks for MRI analysis of brain tumors Zeineldin, Ramy A. Karar, Mohamed E. Elshaer, Ziad Coburger, Jan Wirtz, Christian R. Burgert, Oliver Mathis-Ullrich, Franziska Int J Comput Assist Radiol Surg Original Article PURPOSE: Artificial intelligence (AI), in particular deep neural networks, has achieved remarkable results for medical image analysis in several applications. Yet the lack of explainability of deep neural models is considered the principal restriction before applying these methods in clinical practice. METHODS: In this study, we propose a NeuroXAI framework for explainable AI of deep learning networks to increase the trust of medical experts. NeuroXAI implements seven state-of-the-art explanation methods providing visualization maps to help make deep learning models transparent. RESULTS: NeuroXAI has been applied to two applications of the most widely investigated problems in brain imaging analysis, i.e., image classification and segmentation using magnetic resonance (MR) modality. Visual attention maps of multiple XAI methods have been generated and compared for both applications. Another experiment demonstrated that NeuroXAI can provide information flow visualization on internal layers of a segmentation CNN. CONCLUSION: Due to its open architecture, ease of implementation, and scalability to new XAI methods, NeuroXAI could be utilized to assist radiologists and medical professionals in the detection and diagnosis of brain tumors in the clinical routine of cancer patients. The code of NeuroXAI is publicly accessible at https://github.com/razeineldin/NeuroXAI. Springer International Publishing 2022-04-23 2022 /pmc/articles/PMC9463287/ /pubmed/35460019 http://dx.doi.org/10.1007/s11548-022-02619-x Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Original Article
Zeineldin, Ramy A.
Karar, Mohamed E.
Elshaer, Ziad
Coburger, Jan
Wirtz, Christian R.
Burgert, Oliver
Mathis-Ullrich, Franziska
Explainability of deep neural networks for MRI analysis of brain tumors
title Explainability of deep neural networks for MRI analysis of brain tumors
title_full Explainability of deep neural networks for MRI analysis of brain tumors
title_fullStr Explainability of deep neural networks for MRI analysis of brain tumors
title_full_unstemmed Explainability of deep neural networks for MRI analysis of brain tumors
title_short Explainability of deep neural networks for MRI analysis of brain tumors
title_sort explainability of deep neural networks for mri analysis of brain tumors
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9463287/
https://www.ncbi.nlm.nih.gov/pubmed/35460019
http://dx.doi.org/10.1007/s11548-022-02619-x
work_keys_str_mv AT zeineldinramya explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT kararmohamede explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT elshaerziad explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT coburgerjan explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT wirtzchristianr explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT burgertoliver explainabilityofdeepneuralnetworksformrianalysisofbraintumors
AT mathisullrichfranziska explainabilityofdeepneuralnetworksformrianalysisofbraintumors