Explainability of deep learning models in medical video analysis: a survey
Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have been performing significantly better than traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare.
Main Authors: | Kolarik, Michal; Sarnovsky, Martin; Paralic, Jan; Babic, Frantisek |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2023 |
Subjects: | Bioinformatics |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280416/ https://www.ncbi.nlm.nih.gov/pubmed/37346619 http://dx.doi.org/10.7717/peerj-cs.1253 |
_version_ | 1785060789580726272 |
---|---|
author | Kolarik, Michal; Sarnovsky, Martin; Paralic, Jan; Babic, Frantisek |
author_facet | Kolarik, Michal; Sarnovsky, Martin; Paralic, Jan; Babic, Frantisek |
author_sort | Kolarik, Michal |
collection | PubMed |
description | Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have been performing significantly better than traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Therefore, the explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may affect the possibility of adopting such models in clinical use. There are various studies reviewing approaches to explainability in multiple domains. This article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area. |
format | Online Article Text |
id | pubmed-10280416 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10280416 2023-06-21 Explainability of deep learning models in medical video analysis: a survey Kolarik, Michal; Sarnovsky, Martin; Paralic, Jan; Babic, Frantisek PeerJ Comput Sci Bioinformatics Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have been performing significantly better than traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Therefore, the explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may affect the possibility of adopting such models in clinical use. There are various studies reviewing approaches to explainability in multiple domains. This article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area. PeerJ Inc. 2023-03-14 /pmc/articles/PMC10280416/ /pubmed/37346619 http://dx.doi.org/10.7717/peerj-cs.1253 Text en ©2023 Kolarik et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
spellingShingle | Bioinformatics; Kolarik, Michal; Sarnovsky, Martin; Paralic, Jan; Babic, Frantisek; Explainability of deep learning models in medical video analysis: a survey |
title | Explainability of deep learning models in medical video analysis: a survey |
title_full | Explainability of deep learning models in medical video analysis: a survey |
title_fullStr | Explainability of deep learning models in medical video analysis: a survey |
title_full_unstemmed | Explainability of deep learning models in medical video analysis: a survey |
title_short | Explainability of deep learning models in medical video analysis: a survey |
title_sort | explainability of deep learning models in medical video analysis: a survey |
topic | Bioinformatics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280416/ https://www.ncbi.nlm.nih.gov/pubmed/37346619 http://dx.doi.org/10.7717/peerj-cs.1253 |
work_keys_str_mv | AT kolarikmichal explainabilityofdeeplearningmodelsinmedicalvideoanalysisasurvey AT sarnovskymartin explainabilityofdeeplearningmodelsinmedicalvideoanalysisasurvey AT paralicjan explainabilityofdeeplearningmodelsinmedicalvideoanalysisasurvey AT babicfrantisek explainabilityofdeeplearningmodelsinmedicalvideoanalysisasurvey |