Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology

Bibliographic Details
Main Authors: Sauter, Daniel, Lodde, Georg, Nensa, Felix, Schadendorf, Dirk, Livingstone, Elisabeth, Kukuk, Markus
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319808/
https://www.ncbi.nlm.nih.gov/pubmed/35891026
http://dx.doi.org/10.3390/s22145346
author Sauter, Daniel
Lodde, Georg
Nensa, Felix
Schadendorf, Dirk
Livingstone, Elisabeth
Kukuk, Markus
collection PubMed
description Digital histopathology poses several challenges to deep learning, such as label noise, class imbalance, limited availability of labelled data, and several latent biases, which negatively influence transparency, reproducibility, and classification performance. In particular, biases are well known to cause poor generalization. Proposed tools from explainable artificial intelligence (XAI), bias detection, and bias discovery suffer from technical challenges, complexity, unintuitive usage, inherent biases, or a semantic gap. A promising XAI method that has not been studied in the context of digital histopathology is automated concept-based explanation (ACE), which automatically extracts visual concepts from image data. Our objective is to evaluate ACE’s technical validity following design science principles and to compare it to Guided Gradient-weighted Class Activation Mapping (Grad-CAM), a conventional pixel-wise explanation method. To that end, we created and studied five convolutional neural networks (CNNs) in four different skin cancer settings. Our results demonstrate that ACE is a valid tool for gaining insights into the decision process of histopathological CNNs, insights that can go beyond the explanations of the control method. ACE validly visualized a class sampling ratio bias, a measurement bias, a sampling bias, and a class-correlated bias. Furthermore, its complementary use with Guided Grad-CAM offers several benefits. Finally, we propose practical solutions for several technical challenges. In contrast to results from the literature, we observed lower intuitiveness in some dermatopathology scenarios than for concept-based explanations on real-world images.
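For orientation, the following is a minimal sketch of the ACE-style pipeline the description refers to: segment each image into superpixels, embed the segments with a trained CNN, and cluster the embeddings into visual concepts. This is not the authors' implementation; the `embed` function is a hypothetical stand-in for the CNN's bottleneck layer, and the segmentation and clustering parameters are illustrative (ACE typically runs at several superpixel resolutions).

```python
# Minimal sketch of an ACE-style concept discovery pipeline (illustrative only).
# Assumes: `images` is a list of HxWx3 uint8 arrays from one class, and
# `embed` maps a batch of patches to bottleneck activations of a trained CNN.
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize
from sklearn.cluster import KMeans

def extract_segments(images, n_segments=15, patch_size=(224, 224)):
    """Split each image into SLIC superpixels, crop each superpixel's
    bounding box, and resize the crops to the model's input size."""
    patches = []
    for img in images:
        labels = slic(img, n_segments=n_segments, compactness=20)
        for seg_id in np.unique(labels):
            ys, xs = np.where(labels == seg_id)
            crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            patches.append(resize(crop, patch_size, anti_aliasing=True))
    return np.stack(patches)

def discover_concepts(patches, embed, n_concepts=10):
    """Cluster the patches' bottleneck activations into visual concepts;
    each cluster of similar-looking segments is one candidate concept."""
    activations = embed(patches)                      # (n_patches, n_features)
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(activations)
    return km.labels_                                  # concept index per patch
```

In the full method, each discovered concept would then be scored for class importance (e.g., with TCAV), which is the step where biases such as those visualized in this article can surface.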
format Online
Article
Text
id pubmed-9319808
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9319808 2022-07-27 Sensors (Basel). Article. MDPI, published online 2022-07-18. /pmc/articles/PMC9319808/ /pubmed/35891026 http://dx.doi.org/10.3390/s22145346 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319808/
https://www.ncbi.nlm.nih.gov/pubmed/35891026
http://dx.doi.org/10.3390/s22145346