
A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features

Bibliographic Details
Main Authors: Severn, Cameron, Suresh, Krithika, Görg, Carsten, Choi, Yoon Seong, Jain, Rajan, Ghosh, Debashis
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9318445/
https://www.ncbi.nlm.nih.gov/pubmed/35890885
http://dx.doi.org/10.3390/s22145205
_version_ 1784755292473393152
author Severn, Cameron
Suresh, Krithika
Görg, Carsten
Choi, Yoon Seong
Jain, Rajan
Ghosh, Debashis
author_sort Severn, Cameron
collection PubMed
description Machine learning (ML) models have been shown to predict the presence of clinical factors from medical imaging with remarkable accuracy. However, these complex models can be difficult to interpret and are often criticized as “black boxes”. Prediction models that provide no insight into how their predictions are obtained are difficult to trust for making important clinical decisions, such as medical diagnosis or treatment. Explainable machine learning (XML) methods, such as Shapley values, have made it possible to explain the behavior of ML algorithms and to identify which predictors contribute most to a prediction. Incorporating XML methods into medical software tools has the potential to increase trust in ML-powered predictions and to aid physicians in making medical decisions. Specifically, in the field of medical imaging analysis, the most commonly used methods for explaining deep learning-based model predictions are saliency maps, which highlight important areas of an image but do not provide a straightforward interpretation of which qualities of an image area are important. Here, we describe a novel pipeline for XML imaging that uses radiomics data and Shapley values to explain outcome predictions from complex prediction models built on medical imaging data with well-defined predictors. We present a visualization of XML imaging results in a clinician-focused dashboard that can be generalized to various settings. We demonstrate this workflow by developing and explaining a prediction model that uses MRI data from glioma patients to predict a genetic mutation.
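The description sketches the pipeline at a high level: extract well-defined radiomics features from the images, fit a prediction model on those features, attribute each prediction to individual features with Shapley values, and surface the result in a dashboard. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the feature file name, the "mutation_status" label column, and the random forest model are assumptions, with radiomics features presumed to come from a tool such as PyRadiomics and Shapley values computed with the open-source shap library.

# Minimal sketch of the pipeline idea described above; NOT the authors' code.
# Assumptions: a CSV of per-patient radiomics features (e.g., exported from a
# tool such as PyRadiomics) with a binary "mutation_status" label column; a
# random forest stands in for whatever model the pipeline actually uses.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

features = pd.read_csv("radiomics_features.csv")   # hypothetical file name
labels = features.pop("mutation_status")           # hypothetical label column

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels
)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes exact Shapley values for tree ensembles: each value
# is one radiomics feature's contribution to one patient's predicted
# probability of carrying the mutation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# For a binary classifier, shap returns per-class attributions; keep the
# positive (mutation-present) class. Depending on the shap version this is
# either the second list element or the last axis of a 3-D array.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Cohort-level view of which radiomics features drive the predictions; the
# same per-patient values could feed a clinician-facing dashboard.
shap.summary_plot(sv_pos, X_test)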
format Online
Article
Text
id pubmed-9318445
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9318445 2022-07-27 A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features. Severn, Cameron; Suresh, Krithika; Görg, Carsten; Choi, Yoon Seong; Jain, Rajan; Ghosh, Debashis. Sensors (Basel), Article. MDPI 2022-07-12 /pmc/articles/PMC9318445/ /pubmed/35890885 http://dx.doi.org/10.3390/s22145205 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Pipeline for the Implementation and Visualization of Explainable Machine Learning for Medical Imaging Using Radiomics Features
title_sort pipeline for the implementation and visualization of explainable machine learning for medical imaging using radiomics features
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9318445/
https://www.ncbi.nlm.nih.gov/pubmed/35890885
http://dx.doi.org/10.3390/s22145205