
Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

Problem: An application of Explainable Artificial Intelligence methods for COVID CT-scan classifiers is presented. Motivation: It is possible that classifiers are using spurious artifacts in dataset images to achieve high performance, and such explainable techniques can help identify this issue. Aim...


Bibliographic Details
Main Authors: Palatnik de Sousa, Iam; Vellasco, Marley M. B. R.; Costa da Silva, Eduardo
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402377/
https://www.ncbi.nlm.nih.gov/pubmed/34451100
http://dx.doi.org/10.3390/s21165657
_version_ 1783745775209021440
author Palatnik de Sousa, Iam
Vellasco, Marley M. B. R.
Costa da Silva, Eduardo
author_facet Palatnik de Sousa, Iam
Vellasco, Marley M. B. R.
Costa da Silva, Eduardo
author_sort Palatnik de Sousa, Iam
collection PubMed
description Problem: An application of Explainable Artificial Intelligence methods for COVID CT-scan classifiers is presented. Motivation: It is possible that classifiers are using spurious artifacts in dataset images to achieve high performance, and such explainable techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem in order to create a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: Results further show that small differences in validation accuracy can cause drastic changes in explanation heatmaps for DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80 to 90% range) could give users the erroneous impression that there is no bias; however, analysis of the explanation heatmaps highlights the bias.
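As a reading aid for the description above, the following is a minimal, illustrative Grad-CAM sketch in Python/TensorFlow. It is not the authors' code: it assumes a stock ImageNet-pretrained VGG16 as a stand-in for the fine-tuned CT-scan classifiers studied in the article, and the layer name, preprocessing, and grad_cam helper are assumptions for demonstration only. It shows how one of the named explanation methods produces a class-activation heatmap that can be overlaid on a CT slice to check whether the classifier attends to lung tissue or to spurious artifacts.

    # Illustrative Grad-CAM sketch (assumption: VGG16 as a stand-in classifier).
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications import VGG16

    model = VGG16(weights="imagenet")            # stand-in; the paper fine-tunes on CT scans
    last_conv = model.get_layer("block5_conv3")  # last convolutional layer of VGG16

    # Sub-model mapping an input image to (last conv feature maps, predictions).
    grad_model = tf.keras.models.Model(model.inputs, [last_conv.output, model.output])

    def grad_cam(image, class_index=None):
        """Return a heatmap in [0, 1] highlighting regions driving the predicted class."""
        x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
        with tf.GradientTape() as tape:
            conv_out, preds = grad_model(x)
            if class_index is None:
                class_index = tf.argmax(preds[0])
            class_score = preds[:, class_index]
        grads = tape.gradient(class_score, conv_out)            # d(score)/d(feature maps)
        weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool the gradients
        cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)   # weighted sum of feature maps
        cam = tf.nn.relu(cam)                                    # keep only positive evidence
        cam = cam / (tf.reduce_max(cam) + 1e-8)                  # normalize to [0, 1]
        return cam.numpy()

    # Usage (hypothetical input): heatmap = grad_cam(preprocessed_ct_slice)
    # Upsample the heatmap to the slice resolution and overlay it to inspect for bias.
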
format Online
Article
Text
id pubmed-8402377
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8402377 2021-08-29 Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers Palatnik de Sousa, Iam Vellasco, Marley M. B. R. Costa da Silva, Eduardo Sensors (Basel) Article Problem: An application of Explainable Artificial Intelligence methods for COVID CT-scan classifiers is presented. Motivation: It is possible that classifiers are using spurious artifacts in dataset images to achieve high performance, and such explainable techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem in order to create a complete overview of the classifications. Methodology: The techniques used included GradCAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was shown to be the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: Results further show that small differences in validation accuracy can cause drastic changes in explanation heatmaps for DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80 to 90% range) could give users the erroneous impression that there is no bias; however, analysis of the explanation heatmaps highlights the bias. MDPI 2021-08-23 /pmc/articles/PMC8402377/ /pubmed/34451100 http://dx.doi.org/10.3390/s21165657 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Palatnik de Sousa, Iam
Vellasco, Marley M. B. R.
Costa da Silva, Eduardo
Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title_full Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title_fullStr Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title_full_unstemmed Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title_short Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers
title_sort explainable artificial intelligence for bias detection in covid ct-scan classifiers
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402377/
https://www.ncbi.nlm.nih.gov/pubmed/34451100
http://dx.doi.org/10.3390/s21165657
work_keys_str_mv AT palatnikdesousaiam explainableartificialintelligenceforbiasdetectionincovidctscanclassifiers
AT vellascomarleymbr explainableartificialintelligenceforbiasdetectionincovidctscanclassifiers
AT costadasilvaeduardo explainableartificialintelligenceforbiasdetectionincovidctscanclassifiers