Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet

Bibliographic Details
Main Author: Taşcı, Burak
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10000758/
https://www.ncbi.nlm.nih.gov/pubmed/36900004
http://dx.doi.org/10.3390/diagnostics13050859
_version_ 1784903959048093696
author Taşcı, Burak
collection PubMed
description Artificial intelligence models do not provide information about how exactly their predictions are reached. This lack of transparency is a major drawback. Interest in explainable artificial intelligence (XAI), which helps to develop methods for visualizing, explaining, and analyzing deep learning models, has recently increased, particularly in medical applications. With XAI, it is possible to understand whether the solutions offered by deep learning techniques are safe. This paper aims to diagnose a fatal disease such as a brain tumor faster and more accurately using XAI methods. Two datasets that are widely used in the literature were chosen: the four-class Kaggle brain tumor dataset (Dataset I) and the three-class figshare brain tumor dataset (Dataset II). A pre-trained deep learning model, DenseNet201, was used as the feature extractor. The proposed automated brain tumor detection model comprises five stages. First, DenseNet201 was trained on brain MR images, and the tumor area was segmented with Grad-CAM. Features were then extracted from the trained DenseNet201 using the exemplar method. The extracted features were selected with the iterative neighborhood component analysis (INCA) feature selector. Finally, the selected features were classified using a support vector machine (SVM) with 10-fold cross-validation. Accuracies of 98.65% and 99.97% were obtained for Datasets I and II, respectively. The proposed model outperformed state-of-the-art methods and can be used to aid radiologists in their diagnosis.
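
The pipeline in the description (pre-trained DenseNet201 features, iterative feature selection, SVM with 10-fold cross-validation) can be illustrated with a short sketch. The code below is not the author's published implementation: image paths and labels are placeholders, the Grad-CAM segmentation and exemplar (patch) stages are omitted, and scikit-learn's mutual_info_classif stands in for the paper's NCA-based feature ranking, since scikit-learn has no direct equivalent of MATLAB's fscnca. It is a minimal sketch under those assumptions.

    import numpy as np
    import torch
    from PIL import Image
    from torchvision import models, transforms
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Pre-trained DenseNet201 as a fixed feature extractor; global average
    # pooling over the last convolutional block yields 1920-d vectors.
    model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
    model.eval()

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(image_path: str) -> np.ndarray:
        """Return the 1920-d DenseNet201 feature vector for one MR slice."""
        img = Image.open(image_path).convert("RGB")
        x = preprocess(img).unsqueeze(0)                    # (1, 3, 224, 224)
        fmap = torch.nn.functional.relu(model.features(x))  # (1, 1920, 7, 7)
        pooled = torch.nn.functional.adaptive_avg_pool2d(fmap, 1)
        return pooled.flatten().numpy()

    def inca_like_selection(X, y, k_min=10, k_max=500, step=10):
        """Iterative selection in the spirit of INCA: rank all features,
        then sweep subset sizes and keep the subset whose 10-fold
        cross-validated SVM accuracy is highest. (The paper ranks with
        NCA; mutual information is used here as a stand-in.)"""
        order = np.argsort(mutual_info_classif(X, y))[::-1]
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        best_idx, best_acc = order[:k_min], 0.0
        for k in range(k_min, min(k_max, X.shape[1]) + 1, step):
            idx = order[:k]
            acc = cross_val_score(clf, X[:, idx], y, cv=10).mean()
            if acc > best_acc:
                best_idx, best_acc = idx, acc
        return best_idx, best_acc

    # Usage (image_paths and labels are illustrative placeholders):
    # X = np.stack([extract_features(p) for p in image_paths])
    # idx, acc = inca_like_selection(X, labels)
    # print(f"selected {len(idx)} features, 10-fold CV accuracy = {acc:.4f}")
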
format Online Article Text
id pubmed-10000758
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10000758 2023-03-11 Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet. Taşcı, Burak. Diagnostics (Basel), Article. MDPI, 2023-02-23. /pmc/articles/PMC10000758/ /pubmed/36900004 http://dx.doi.org/10.3390/diagnostics13050859 Text en © 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Attention Deep Feature Extraction from Brain MRIs in Explainable Mode: DGXAINet
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10000758/
https://www.ncbi.nlm.nih.gov/pubmed/36900004
http://dx.doi.org/10.3390/diagnostics13050859