Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis
The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and provide no evidence of the process by which they perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their full integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts, in order to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three networks with standard architectures and outline similarities and differences in the process these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable, disentangled concepts at the filter level, and that they take a top-down, hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and provide a measure of uncertainty for the models' outputs, giving additional qualitative evidence about their predictions. We believe that the emergence of such human-understandable organization and concepts may aid the acceptance and integration of such methods in medical diagnosis.
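The abstract mentions a measure of uncertainty over the models' outputs. As an illustration only (the record does not state the authors' exact method), a common way to obtain such a measure is Monte Carlo dropout at test time. The following minimal PyTorch sketch assumes a hypothetical segmentation `model` containing dropout layers and an input batch `x` of multi-modal MRI slices; a second sketch, on extracting internal feature maps like those the abstract describes, follows the record fields below.

```python
import torch

def mc_dropout_uncertainty(model, x, n_samples=20):
    """Per-voxel predictive uncertainty via Monte Carlo (test-time) dropout.

    Illustrative sketch only: assumes `model` is a segmentation network
    with dropout layers that returns class logits of shape (B, C, H, W).
    """
    # Put only the dropout modules in train mode so they stay stochastic
    # at inference, leaving e.g. batch-norm statistics frozen.
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()

    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
        )                                     # (n_samples, B, C, H, W)

    mean_probs = probs.mean(dim=0)            # averaged class probabilities
    # Predictive entropy is high where the stochastic passes disagree.
    entropy = -(mean_probs * (mean_probs + 1e-8).log()).sum(dim=1)
    return mean_probs.argmax(dim=1), entropy  # label map and uncertainty map
```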
Main Authors: | Natekar, Parth; Kori, Avinash; Krishnamurthi, Ganapathy |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2020 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7025464/ https://www.ncbi.nlm.nih.gov/pubmed/32116620 http://dx.doi.org/10.3389/fncom.2020.00006 |
_version_ | 1783498515203227648 |
---|---|
author | Natekar, Parth; Kori, Avinash; Krishnamurthi, Ganapathy |
collection | PubMed |
description | The accurate automatic segmentation of gliomas and their intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNNs) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and provide no evidence of the process by which they perform this task. Increasing the transparency and interpretability of such deep learning techniques is necessary for their full integration into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts, in order to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three networks with standard architectures and outline similarities and differences in the process these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable, disentangled concepts at the filter level, and that they take a top-down, hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and provide a measure of uncertainty for the models' outputs, giving additional qualitative evidence about their predictions. We believe that the emergence of such human-understandable organization and concepts may aid the acceptance and integration of such methods in medical diagnosis. |
format | Online Article Text |
id | pubmed-7025464 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7025464 2020-02-28. Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis. Natekar, Parth; Kori, Avinash; Krishnamurthi, Ganapathy. Front Comput Neurosci (Neuroscience). Frontiers Media S.A., 2020-02-07. /pmc/articles/PMC7025464/ /pubmed/32116620 http://dx.doi.org/10.3389/fncom.2020.00006 Text en. Copyright © 2020 Natekar, Kori and Krishnamurthi. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms. |
title | Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7025464/ https://www.ncbi.nlm.nih.gov/pubmed/32116620 http://dx.doi.org/10.3389/fncom.2020.00006 |
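The record's description also reports extracting visualizations of internal feature maps. A minimal sketch of one standard way to do this in PyTorch uses forward hooks; the module paths passed in `layer_names` (e.g. "encoder.block3") are hypothetical, since the record does not name the internal layers of the three architectures.

```python
import torch

def capture_feature_maps(model, x, layer_names):
    """Capture intermediate activations with forward hooks.

    Illustrative sketch: `layer_names` are dotted module paths as found in
    model.named_modules(); these names are assumptions, not taken from the
    paper. Returns a dict mapping layer name -> activation tensor.
    """
    captured, handles = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            captured[name] = output.detach().cpu()
        return hook

    modules = dict(model.named_modules())
    for name in layer_names:
        handles.append(modules[name].register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(x)           # one forward pass populates `captured`

    for h in handles:      # always remove hooks afterwards
        h.remove()
    return captured
```

The captured activations can then be rendered channel by channel (for instance with matplotlib) to inspect what individual filters respond to, which is the kind of filter-level evidence the abstract describes.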