
Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization

Primary brain malignancies in adults are fatal worldwide. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in a range of image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rationale behind their predictions; such interpretability is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process itself. This study evaluates how well selected deep-learning algorithms localize tumor lesions and distinguish them from healthy regions across magnetic resonance imaging (MRI) contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can provide intuition about model behavior and may play an important role in the performance evaluation of deep-learning models. Developing explainable AI approaches will be essential for improving human–machine interactions and assisting in the selection of optimal training methods.
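The abstract describes explainable AI as visualizing the high-level features of a trained model but does not name a specific technique in this record. As a purely illustrative sketch, not the paper's documented method, the following Python/PyTorch code computes a Grad-CAM heatmap from a stand-in ResNet-18 classifier; the model, the hooked layer, and the dummy input slice are all assumptions made for the example.

    # Illustrative Grad-CAM sketch (assumed technique, not necessarily the
    # paper's): highlight the image regions that drove a CNN's prediction.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None)  # stand-in for a trained tumor classifier
    model.eval()

    activations, gradients = {}, {}

    def save_activation(module, inputs, output):
        activations["feat"] = output.detach()

    def save_gradient(module, grad_input, grad_output):
        gradients["feat"] = grad_output[0].detach()

    # Hook the last convolutional block, where spatial features are richest.
    model.layer4.register_forward_hook(save_activation)
    model.layer4.register_full_backward_hook(save_gradient)

    def grad_cam(image):
        logits = model(image)                  # (1, num_classes)
        idx = logits[0].argmax()               # predicted class
        model.zero_grad()
        logits[0, idx].backward()              # gradients of top-class score
        acts, grads = activations["feat"], gradients["feat"]
        weights = grads.mean(dim=(2, 3), keepdim=True)  # channel importance
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                            align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze()  # normalized (H, W) map

    heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # dummy input slice

Overlaying such a heatmap on the input image and comparing it against a ground-truth lesion mask is one way to check whether a classifier that labels a scan as tumor-containing is actually attending to the lesion rather than to non-relevant features, the failure mode the abstract reports.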

Bibliographic Details
Main Authors: Esmaeili, Morteza; Vettukattil, Riyas; Banitalebi, Hasan; Krogh, Nina R.; Geitung, Jonn Terje
Format: Online, Article, Text
Language: English
Published: MDPI, 2021
Subjects: Communication
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8618183/
https://www.ncbi.nlm.nih.gov/pubmed/34834566
http://dx.doi.org/10.3390/jpm11111213
_version_ 1784604686433648640
author Esmaeili, Morteza
Vettukattil, Riyas
Banitalebi, Hasan
Krogh, Nina R.
Geitung, Jonn Terje
author_facet Esmaeili, Morteza
Vettukattil, Riyas
Banitalebi, Hasan
Krogh, Nina R.
Geitung, Jonn Terje
author_sort Esmaeili, Morteza
collection PubMed
description Primary brain malignancies in adults are fatal worldwide. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in a range of image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rationale behind their predictions; such interpretability is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process itself. This study evaluates how well selected deep-learning algorithms localize tumor lesions and distinguish them from healthy regions across magnetic resonance imaging (MRI) contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can provide intuition about model behavior and may play an important role in the performance evaluation of deep-learning models. Developing explainable AI approaches will be essential for improving human–machine interactions and assisting in the selection of optimal training methods.
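For context, the R = 0.46, p = 0.005 figure reported in the description is a Pearson correlation between classification accuracy and lesion-localization accuracy across the study's experiments. A minimal sketch of how such a statistic is computed, with made-up accuracy pairs standing in for the paper's data (which are not part of this record):

    # Hypothetical illustration only: the values below are invented, not the
    # study's measurements; only R = 0.46, p = 0.005 comes from the abstract.
    from scipy.stats import pearsonr

    classification_acc = [0.91, 0.88, 0.95, 0.84, 0.90, 0.93]
    localization_acc   = [0.62, 0.55, 0.71, 0.49, 0.58, 0.66]

    r, p = pearsonr(classification_acc, localization_acc)
    print(f"R = {r:.2f}, p = {p:.3f}")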
format Online
Article
Text
id pubmed-8618183
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8618183 2021-11-27 Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization Esmaeili, Morteza Vettukattil, Riyas Banitalebi, Hasan Krogh, Nina R. Geitung, Jonn Terje J Pers Med Communication Primary brain malignancies in adults are fatal worldwide. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in a range of image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rationale behind their predictions; such interpretability is an essential step towards translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features of trained models or to integrate interpretability into the training process itself. This study evaluates how well selected deep-learning algorithms localize tumor lesions and distinguish them from healthy regions across magnetic resonance imaging (MRI) contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can provide intuition about model behavior and may play an important role in the performance evaluation of deep-learning models. Developing explainable AI approaches will be essential for improving human–machine interactions and assisting in the selection of optimal training methods. MDPI 2021-11-16 /pmc/articles/PMC8618183/ /pubmed/34834566 http://dx.doi.org/10.3390/jpm11111213 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Communication
Esmaeili, Morteza
Vettukattil, Riyas
Banitalebi, Hasan
Krogh, Nina R.
Geitung, Jonn Terje
Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title_full Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title_fullStr Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title_full_unstemmed Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title_short Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
title_sort explainable artificial intelligence for human-machine interaction in brain tumor localization
topic Communication
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8618183/
https://www.ncbi.nlm.nih.gov/pubmed/34834566
http://dx.doi.org/10.3390/jpm11111213
work_keys_str_mv AT esmaeilimorteza explainableartificialintelligenceforhumanmachineinteractioninbraintumorlocalization
AT vettukattilriyas explainableartificialintelligenceforhumanmachineinteractioninbraintumorlocalization
AT banitalebihasan explainableartificialintelligenceforhumanmachineinteractioninbraintumorlocalization
AT kroghninar explainableartificialintelligenceforhumanmachineinteractioninbraintumorlocalization
AT geitungjonnterje explainableartificialintelligenceforhumanmachineinteractioninbraintumorlocalization