Opening the black box of machine learning in radiology: can the proximity of annotated cases be a way?
Main authors:
Format: Online article (full text)
Language: English
Published: Springer International Publishing, 2020
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7200961/
https://www.ncbi.nlm.nih.gov/pubmed/32372200
http://dx.doi.org/10.1186/s41747-020-00159-0
Summary: Machine learning (ML) and deep learning (DL) systems, currently employed in medical image analysis, are data-driven models often considered black boxes. However, improved transparency is needed to translate automated decision-making to clinical practice. To this end, we propose a strategy to open the black box by presenting to the radiologist the annotated cases (ACs) proximal to the current case (CC), making the decision rationale and uncertainty more explicit. The ACs, used for training, validation, and testing in supervised methods and for validation and testing in unsupervised ones, could be provided in support of the ML/DL tool. If the CC is localised in a classification space and proximal ACs are selected by proper metrics, the latter could be shown to radiologists in their original form as images, enriched with annotations, thus allowing immediate interpretation of the CC classification. Moreover, the density of ACs in the CC neighbourhood, their image saliency maps, classification confidence, demographics, and clinical information would be available to radiologists. In this way, information otherwise encrypted in the model could be conveyed to radiologists, who would know the model output (what) and the salient image regions (where), enriched by ACs providing the classification rationale (why). Summarising: if a classifier is data-driven, let us make its interpretation data-driven too.
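The retrieval step the abstract describes, localising the CC in a classification space and selecting proximal ACs by a distance metric, can be illustrated with a short sketch. The following is a minimal, hypothetical Python example, not the authors' implementation: it assumes each case has already been mapped to a fixed-length feature vector by some embedding model, and all names, shapes, and the choice of Euclidean distance are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical setup: each annotated case (AC) has been mapped to a
# fixed-length feature vector by the same extractor that will embed
# the current case (CC). Sizes and labels here are illustrative only.
rng = np.random.default_rng(0)
ac_features = rng.random((500, 128))        # 500 ACs in a 128-D space
ac_labels = rng.integers(0, 2, 500)          # e.g. benign=0 / malignant=1

# Index the ACs once. Euclidean distance stands in for the paper's
# "proper metrics"; cosine or a learned metric could be substituted.
index = NearestNeighbors(n_neighbors=10, metric="euclidean").fit(ac_features)

def proximal_annotated_cases(cc_feature, k=10):
    """Return indices, distances, and labels of the k ACs nearest the CC."""
    dist, idx = index.kneighbors(cc_feature.reshape(1, -1), n_neighbors=k)
    return idx[0], dist[0], ac_labels[idx[0]]

def neighbourhood_density(distances):
    """Crude density proxy: inverse of the mean distance to the k neighbours.
    A denser AC neighbourhood suggests the classifier is operating in a
    well-sampled region of the space."""
    return 1.0 / (distances.mean() + 1e-12)

cc_feature = rng.random(128)                 # embedding of the current case
idx, dist, labels = proximal_annotated_cases(cc_feature, k=10)
print(f"Nearest ACs: {idx}")
print(f"Fraction of neighbours labelled positive: {labels.mean():.2f}")
print(f"Neighbourhood density: {neighbourhood_density(dist):.3f}")
```

In a real system, the returned indices would map back to the original annotated images, their saliency maps, and the associated demographic and clinical metadata, which is the material the authors propose to display alongside the CC so that the radiologist can read the classification rationale directly from the neighbouring cases.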