A survey on the interpretability of deep learning in medical diagnosis
Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are “black-box” structures: opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, including common interpretability methods used in the medical domain, applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. The challenges of interpretability and future research directions are also discussed. To the best of our knowledge, this is the first time that the various applications of interpretability methods for disease diagnosis have been summarized.
Main Authors: | Teng, Qiaoying; Liu, Zhe; Song, Yuqing; Han, Kai; Lu, Yang |
Format: | Online Article Text |
Language: | English |
Published: | Springer Berlin Heidelberg, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9243744/ https://www.ncbi.nlm.nih.gov/pubmed/35789785 http://dx.doi.org/10.1007/s00530-022-00960-4 |
_version_ | 1784738380617089024 |
author | Teng, Qiaoying Liu, Zhe Song, Yuqing Han, Kai Lu, Yang |
author_facet | Teng, Qiaoying Liu, Zhe Song, Yuqing Han, Kai Lu, Yang |
author_sort | Teng, Qiaoying |
collection | PubMed |
description | Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are “black-box” structures: opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, including common interpretability methods used in the medical domain, applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. The challenges of interpretability and future research directions are also discussed. To the best of our knowledge, this is the first time that the various applications of interpretability methods for disease diagnosis have been summarized. |
format | Online Article Text |
id | pubmed-9243744 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-9243744 2022-06-30 A survey on the interpretability of deep learning in medical diagnosis Teng, Qiaoying Liu, Zhe Song, Yuqing Han, Kai Lu, Yang Multimed Syst Regular Article Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are “black-box” structures: opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency creates a barrier to the application of deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, including common interpretability methods used in the medical domain, applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. The challenges of interpretability and future research directions are also discussed. To the best of our knowledge, this is the first time that the various applications of interpretability methods for disease diagnosis have been summarized. Springer Berlin Heidelberg 2022-06-25 2022 /pmc/articles/PMC9243744/ /pubmed/35789785 http://dx.doi.org/10.1007/s00530-022-00960-4 Text en © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2022 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Regular Article Teng, Qiaoying Liu, Zhe Song, Yuqing Han, Kai Lu, Yang A survey on the interpretability of deep learning in medical diagnosis |
title | A survey on the interpretability of deep learning in medical diagnosis |
title_full | A survey on the interpretability of deep learning in medical diagnosis |
title_fullStr | A survey on the interpretability of deep learning in medical diagnosis |
title_full_unstemmed | A survey on the interpretability of deep learning in medical diagnosis |
title_short | A survey on the interpretability of deep learning in medical diagnosis |
title_sort | survey on the interpretability of deep learning in medical diagnosis |
topic | Regular Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9243744/ https://www.ncbi.nlm.nih.gov/pubmed/35789785 http://dx.doi.org/10.1007/s00530-022-00960-4 |
work_keys_str_mv | AT tengqiaoying asurveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT liuzhe asurveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT songyuqing asurveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT hankai asurveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT luyang asurveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT tengqiaoying surveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT liuzhe surveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT songyuqing surveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT hankai surveyontheinterpretabilityofdeeplearninginmedicaldiagnosis AT luyang surveyontheinterpretabilityofdeeplearninginmedicaldiagnosis |