
Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities


Bibliographic Details
Main Authors: Kim, Incheol, Rajaraman, Sivaramakrishnan, Antani, Sameer
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6627892/
https://www.ncbi.nlm.nih.gov/pubmed/30987172
http://dx.doi.org/10.3390/diagnostics9020038
_version_ 1783434839509172224
author Kim, Incheol
Rajaraman, Sivaramakrishnan
Antani, Sameer
author_facet Kim, Incheol
Rajaraman, Sivaramakrishnan
Antani, Sameer
author_sort Kim, Incheol
collection PubMed
description Deep learning (DL) methods are increasingly being applied to develop reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer an improved explanation of convolutional neural network (CNN)-based DL model predictions. We demonstrate the effectiveness of CRM in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. The CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both the positive and negative contributions of each spatial element in the feature maps produced by the last convolution layer toward correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed to classify seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing the discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of the images in each modality class, and characterize the visual explanation through their different sizes, shapes, and locations for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
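The description above explains the core of CRM: the relevance of each spatial element in the last-convolution-layer feature maps is the incremental mean squared error observed at the output layer when that element is removed. A minimal NumPy sketch of that idea follows; the GAP-plus-dense architecture, the function name `crm_map`, and all shapes are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def crm_map(feature_maps, weights, biases):
    """Sketch of a class-selective relevance map via incremental output-layer MSE.

    feature_maps: (H, W, C) activations from the last convolution layer.
    weights: (C, K) dense weights mapping global-average-pooled features
             to K class scores; biases: (K,).
    Returns an (H, W) relevance map.
    """
    H, W, C = feature_maps.shape
    gap = feature_maps.mean(axis=(0, 1))      # global average pooling
    base_out = gap @ weights + biases         # reference class scores

    relevance = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Remove spatial element (i, j) and recompute the pooled features.
            ablated = gap - feature_maps[i, j] / (H * W)
            out = ablated @ weights + biases
            # Incremental MSE at the output layer captures both positive and
            # negative contributions of this spatial element.
            relevance[i, j] = np.mean((base_out - out) ** 2)
    return relevance
```

In this sketch, upsampling the (H, W) relevance map to the input image size and thresholding it would localize the discriminative ROI, analogous to the class-activation maps the abstract compares against.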
format Online
Article
Text
id pubmed-6627892
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6627892 2019-07-23 Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities Kim, Incheol Rajaraman, Sivaramakrishnan Antani, Sameer Diagnostics (Basel) Article
MDPI 2019-04-03 /pmc/articles/PMC6627892/ /pubmed/30987172 http://dx.doi.org/10.3390/diagnostics9020038 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Kim, Incheol
Rajaraman, Sivaramakrishnan
Antani, Sameer
Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title_full Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title_fullStr Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title_full_unstemmed Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title_short Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities
title_sort visual interpretation of convolutional neural network predictions in classifying medical image modalities
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6627892/
https://www.ncbi.nlm.nih.gov/pubmed/30987172
http://dx.doi.org/10.3390/diagnostics9020038
work_keys_str_mv AT kimincheol visualinterpretationofconvolutionalneuralnetworkpredictionsinclassifyingmedicalimagemodalities
AT rajaramansivaramakrishnan visualinterpretationofconvolutionalneuralnetworkpredictionsinclassifyingmedicalimagemodalities
AT antanisameer visualinterpretationofconvolutionalneuralnetworkpredictionsinclassifyingmedicalimagemodalities