
Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing


Bibliographic Details
Main Authors: Zhang, Hongjian, Ogasawara, Katsuhiko
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10525184/
https://www.ncbi.nlm.nih.gov/pubmed/37760173
http://dx.doi.org/10.3390/bioengineering10091070
author Zhang, Hongjian
Ogasawara, Katsuhiko
author_facet Zhang, Hongjian
Ogasawara, Katsuhiko
author_sort Zhang, Hongjian
collection PubMed
description The opacity of deep learning makes its application challenging in the medical field. There is therefore a need for explainable artificial intelligence (XAI) in medicine, so that models and their results can be explained in a manner humans can understand. This study transfers a high-accuracy computer vision model to medical text tasks and uses the explanatory visualization method known as gradient-weighted class activation mapping (Grad-CAM) to generate heat maps, so that the basis for the model's decisions can be shown intuitively. The system comprises four modules: pre-processing, word embedding, classifier, and visualization. We compared Word2Vec and BERT as word embeddings, and ResNet and one-dimensional convolutional neural networks (1D-CNN) as classifiers; a Bi-LSTM text classifier was also trained as a direct baseline. After 25 epochs, the model using pre-trained ResNet on the formalized text performed best (recall of 90.9%, precision of 91.1%, and a weighted F1 score of 90.2%). This study processes medical texts with ResNet through Grad-CAM-based explainable artificial intelligence and achieves high classification accuracy; at the same time, the Grad-CAM visualization intuitively shows which words the model attends to when making predictions.
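The Grad-CAM computation the abstract refers to can be sketched in plain Python: the gradients of the class score with respect to each activation channel are global-average-pooled into per-channel weights, the channels are combined with those weights, and a ReLU keeps only positively contributing token positions. This is an illustrative sketch of the general Grad-CAM formula, not the authors' implementation; the activation and gradient values below are made-up toy data.

```python
def grad_cam(activations, gradients):
    """Grad-CAM importance per token position.

    activations: list of channels, each a list of per-token activation values A_k.
    gradients:   same shape, gradients of the class score w.r.t. each A_k.
    Returns one non-negative importance score per token position:
        heat[i] = ReLU( sum_k alpha_k * A_k[i] ),  alpha_k = mean_i dY/dA_k[i]
    """
    n_positions = len(activations[0])
    # Channel weights: global average pooling of the gradients.
    alphas = [sum(g) / n_positions for g in gradients]
    # Weighted combination of activation channels, then ReLU.
    heat = []
    for i in range(n_positions):
        s = sum(a * ch[i] for a, ch in zip(alphas, activations))
        heat.append(max(s, 0.0))
    return heat

# Toy example: 2 activation channels over 4 token positions (hypothetical values).
acts = [[0.2, 0.8, 0.1, 0.0],
        [0.5, 0.4, 0.9, 0.1]]
grads = [[0.5, 0.5, 0.5, 0.5],   # pools to alpha_1 = 0.5
         [1.0, 1.0, 1.0, 1.0]]   # pools to alpha_2 = 1.0
heat = grad_cam(acts, grads)
```

In the paper's setting, each position of `heat` would be mapped back to a word in the input text and rendered as a heat map over the sentence, highlighting the words that drove the prediction.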
format Online
Article
Text
id pubmed-10525184
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10525184 2023-09-28 Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing Zhang, Hongjian Ogasawara, Katsuhiko Bioengineering (Basel) Article The opacity of deep learning makes its application challenging in the medical field. There is therefore a need for explainable artificial intelligence (XAI) in medicine, so that models and their results can be explained in a manner humans can understand. This study transfers a high-accuracy computer vision model to medical text tasks and uses the explanatory visualization method known as gradient-weighted class activation mapping (Grad-CAM) to generate heat maps, so that the basis for the model's decisions can be shown intuitively. The system comprises four modules: pre-processing, word embedding, classifier, and visualization. We compared Word2Vec and BERT as word embeddings, and ResNet and one-dimensional convolutional neural networks (1D-CNN) as classifiers; a Bi-LSTM text classifier was also trained as a direct baseline. After 25 epochs, the model using pre-trained ResNet on the formalized text performed best (recall of 90.9%, precision of 91.1%, and a weighted F1 score of 90.2%). This study processes medical texts with ResNet through Grad-CAM-based explainable artificial intelligence and achieves high classification accuracy; at the same time, the Grad-CAM visualization intuitively shows which words the model attends to when making predictions. MDPI 2023-09-10 /pmc/articles/PMC10525184/ /pubmed/37760173 http://dx.doi.org/10.3390/bioengineering10091070 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhang, Hongjian
Ogasawara, Katsuhiko
Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title_full Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title_fullStr Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title_full_unstemmed Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title_short Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
title_sort grad-cam-based explainable artificial intelligence related to medical text processing
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10525184/
https://www.ncbi.nlm.nih.gov/pubmed/37760173
http://dx.doi.org/10.3390/bioengineering10091070
work_keys_str_mv AT zhanghongjian gradcambasedexplainableartificialintelligencerelatedtomedicaltextprocessing
AT ogasawarakatsuhiko gradcambasedexplainableartificialintelligencerelatedtomedicaltextprocessing