
Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks

Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN)-based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories: gradient-based and perturbation-based methods.

• Gradient-based explanation methods, such as Guided BackProp and DeepLift, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, use input-output sampling pairs to estimate feature importance.
• We describe the implementation details that make these methods work for multi-modal image input, and we make the implementation code available.
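The gradient-based idea can be illustrated with a minimal, self-contained sketch (NumPy only). The linear "model", the weights, and the 2-modality toy input below are illustrative assumptions, not the authors' implementation: for a linear score f(x) = Σ(w · x), the gradient with respect to the input is exactly w, so "gradient × input" attribution has a closed form we can inspect per modality.

```python
import numpy as np

# Illustrative sketch only (not the paper's code): a linear "model" stands
# in for a DNN so the gradient of the score w.r.t. the input is known.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 4))   # toy multi-modal input: 2 modalities, 4x4 each
w = rng.normal(size=(2, 4, 4))   # toy model weights

def gradient_x_input(x, w):
    """One attribution heatmap per modality, same shape as the input."""
    grad = w          # d f / d x for the linear model f(x) = sum(w * x)
    return grad * x   # elementwise "gradient x input" attribution

attr = gradient_x_input(x, w)
# Aggregate to a per-modality importance score, as one might when asking
# how much each imaging modality contributed to the prediction.
modality_importance = np.abs(attr).sum(axis=(1, 2))
```

Methods such as Guided BackProp and DeepLift refine how the gradient signal is propagated through a real DNN, but the output has the same shape as here: one heatmap per input modality.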


Bibliographic Details
Main Authors: Jin, Weina, Li, Xiaoxiao, Fatehi, Mostafa, Hamarneh, Ghassan
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects: Computer Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9922805/
https://www.ncbi.nlm.nih.gov/pubmed/36793676
http://dx.doi.org/10.1016/j.mex.2023.102009
collection PubMed
description Explaining model decisions from medical image inputs is necessary for deploying deep neural network (DNN)-based models as clinical decision assistants. The acquisition of multi-modal medical images is pervasive in practice for supporting the clinical decision-making process. Multi-modal images capture different aspects of the same underlying regions of interest. Explaining DNN decisions on multi-modal medical images is thus a clinically important problem. Our methods adopt commonly used post-hoc artificial intelligence feature attribution methods to explain DNN decisions on multi-modal medical images, covering two categories: gradient-based and perturbation-based methods.
• Gradient-based explanation methods, such as Guided BackProp and DeepLift, use the gradient signal to estimate feature importance for the model prediction.
• Perturbation-based methods, such as occlusion, LIME, and kernel SHAP, use input-output sampling pairs to estimate feature importance.
• We describe the implementation details that make these methods work for multi-modal image input, and we make the implementation code available.
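The perturbation-based category can likewise be sketched with plain NumPy. The occlusion loop below is an illustrative assumption, not the paper's implementation: the toy "model" (mean intensity in a fixed region of interest), the patch size, and the zero baseline are all stand-ins. Each patch of each modality is replaced with the baseline, and the drop in the model output is recorded as that patch's importance.

```python
import numpy as np

# Toy stand-in for a trained DNN's class score: mean intensity inside a
# fixed region of interest (ROI), taken over both modalities.
H = W = 8
roi = np.zeros((H, W), dtype=bool)
roi[2:5, 2:5] = True

def model(x):                  # x: (modalities, H, W) multi-modal input
    return x[:, roi].mean()

def occlusion_map(model, x, patch=2, baseline=0.0):
    """Slide a patch over each modality, zero it out, record the output drop."""
    ref = model(x)
    attr = np.zeros_like(x)
    n_mod, height, width = x.shape
    for m in range(n_mod):
        for i in range(0, height, patch):
            for j in range(0, width, patch):
                x_pert = x.copy()
                x_pert[m, i:i + patch, j:j + patch] = baseline
                attr[m, i:i + patch, j:j + patch] = ref - model(x_pert)
    return attr

x = np.ones((2, H, W))
attr = occlusion_map(model, x)
```

Patches overlapping the ROI reduce the output and receive positive attribution; patches outside it receive zero. LIME and kernel SHAP generalize the same input-output sampling idea, fitting a local surrogate over many random perturbations instead of one patch at a time.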
id pubmed-9922805
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal MethodsX
published online 2023-01-10
license © 2023 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
topic Computer Science