Survey of Explainable AI Techniques in Healthcare

Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient’s symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.

Bibliographic Details
Main Authors: Chaddad, Ahmad; Peng, Jihao; Xu, Jian; Bouridane, Ahmed
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Review
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9862413/
https://www.ncbi.nlm.nih.gov/pubmed/36679430
http://dx.doi.org/10.3390/s23020634
collection PubMed
id pubmed-9862413
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9862413 2023-01-22. Survey of Explainable AI Techniques in Healthcare. Chaddad, Ahmad; Peng, Jihao; Xu, Jian; Bouridane, Ahmed. Sensors (Basel), Review. MDPI, 2023-01-05. Text, en. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Review