
Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way?


Bibliographic Details
Main Authors: Baselli, Giuseppe; Codari, Marina; Sardanelli, Francesco
Format: Online Article, Text
Language: English
Published: Springer International Publishing, 2020
Subjects: Hypothesis
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7200961/
https://www.ncbi.nlm.nih.gov/pubmed/32372200
http://dx.doi.org/10.1186/s41747-020-00159-0
Description: Machine learning (ML) and deep learning (DL) systems, currently employed in medical image analysis, are data-driven models often considered black boxes. However, improved transparency is needed to translate automated decision-making to clinical practice. To this end, we propose a strategy to open the black box by presenting to the radiologist the annotated cases (ACs) proximal to the current case (CC), making the decision rationale and its uncertainty more explicit. The ACs, used for training, validation, and testing in supervised methods and for validation and testing in unsupervised ones, could be provided in support of the ML/DL tool. If the CC is localised in a classification space and proximal ACs are selected by appropriate metrics, the latter could be shown to radiologists in their original image form, enriched with annotations, thus allowing immediate interpretation of the CC classification. Moreover, the density of ACs in the CC neighbourhood, their image saliency maps, classification confidence, demographics, and clinical information would be available to radiologists. Thus, otherwise hidden information could be conveyed to radiologists, who would know the model output (what) and the salient image regions (where), enriched by ACs providing the classification rationale (why). Summarising: if a classifier is data-driven, let us make its interpretation data-driven too.
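The retrieval step outlined in the abstract — localising the current case in a classification space, selecting the proximal annotated cases by a distance metric, and reporting their local density — can be sketched as a nearest-neighbour lookup over case embeddings. The function name, the Euclidean metric, and the inverse-mean-distance density proxy below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def proximal_annotated_cases(cc_embedding, ac_embeddings, ac_labels, k=5):
    """Return the k annotated cases (ACs) nearest to the current case (CC)
    in the model's feature space, with their labels and a simple
    local-density score for the CC neighbourhood."""
    # Distance from the CC to every AC in the embedding space
    dists = np.linalg.norm(ac_embeddings - cc_embedding, axis=1)
    # Indices of the k closest ACs (the cases a radiologist would review)
    nearest = np.argsort(dists)[:k]
    # Density proxy: inverse of the mean distance to the k nearest ACs;
    # a denser neighbourhood suggests a better-supported classification
    density = 1.0 / (dists[nearest].mean() + 1e-12)
    return nearest, [ac_labels[i] for i in nearest], density

# Toy example: 2-D embeddings of six annotated cases in two clusters
acs = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
                [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
labels = ["benign", "benign", "benign",
          "malignant", "malignant", "malignant"]
idx, lab, dens = proximal_annotated_cases(np.array([0.05, 0.05]),
                                          acs, labels, k=3)
```

In practice the embeddings would come from the classifier itself (e.g. a penultimate-layer activation), so that "proximity" reflects the space in which the decision is actually made, and the retrieved ACs would be displayed as annotated images rather than index lists.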
Journal: Eur Radiol Exp (Hypothesis section)
Published online: 2020-05-05
License: © The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.