The effect of machine learning explanations on user trust for automated diagnosis of COVID-19
Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases...
Main Authors: | Goel, Kanika; Sindhgatta, Renuka; Kalra, Sumit; Goel, Rohan; Mutreja, Preeti |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier Ltd. 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9080676/ https://www.ncbi.nlm.nih.gov/pubmed/35551007 http://dx.doi.org/10.1016/j.compbiomed.2022.105587 |
_version_ | 1784702841932218368 |
---|---|
author | Goel, Kanika; Sindhgatta, Renuka; Kalra, Sumit; Goel, Rohan; Mutreja, Preeti |
author_facet | Goel, Kanika; Sindhgatta, Renuka; Kalra, Sumit; Goel, Rohan; Mutreja, Preeti |
author_sort | Goel, Kanika |
collection | PubMed |
description | Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computerized tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning behind a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust for complex decision tasks in healthcare is not well understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 from CT images and examines the influence of the quality of these explanations on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare ground-truth explanations with machine learning explanations. Our evaluation shows that the explanations produced by different algorithms were often correct (high precision) when compared to the radiologist-annotated ground truth, but a significant number of explanations were missed (significantly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust in the diagnosis of COVID-19. Our findings show that while clinicians' trust in automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms that are not close to human judgement. Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system. |
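The evaluation described in the abstract compares machine-generated explanations against radiologist annotations using precision and recall. A minimal sketch of such a pixel-wise comparison, assuming both explanations are reduced to binary masks over the CT image (the function name, threshold, and mask shapes below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def explanation_precision_recall(ml_mask: np.ndarray, gt_mask: np.ndarray):
    """Compare a binary ML explanation mask against a radiologist-annotated
    ground-truth mask of the same shape, pixel-wise.

    Both inputs are boolean arrays: True where a region is highlighted.
    """
    tp = np.logical_and(ml_mask, gt_mask).sum()    # highlighted by both
    fp = np.logical_and(ml_mask, ~gt_mask).sum()   # highlighted by ML only
    fn = np.logical_and(~ml_mask, gt_mask).sum()   # annotated regions the ML missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative usage: threshold a saliency heatmap into a binary mask.
saliency = np.random.rand(512, 512)          # stand-in for a saliency/attribution map
ml_mask = saliency > 0.5                     # hypothetical binarization threshold
gt_mask = np.zeros((512, 512), dtype=bool)
gt_mask[200:300, 150:350] = True             # stand-in radiologist annotation
p, r = explanation_precision_recall(ml_mask, gt_mask)
print(f"precision={p:.2f} recall={r:.2f}")
```

Under this framing, precision rewards explanations that highlight only annotated regions, while recall penalizes missed annotations; the latter is the shortfall the study reports for the algorithms it evaluated.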
format | Online Article Text |
id | pubmed-9080676 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Elsevier Ltd. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9080676 2022-05-09 The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 Goel, Kanika Sindhgatta, Renuka Kalra, Sumit Goel, Rohan Mutreja, Preeti Comput Biol Med Article Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computerized tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning behind a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust for complex decision tasks in healthcare is not well understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 from CT images and examines the influence of the quality of these explanations on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare ground-truth explanations with machine learning explanations. Our evaluation shows that the explanations produced by different algorithms were often correct (high precision) when compared to the radiologist-annotated ground truth, but a significant number of explanations were missed (significantly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust in the diagnosis of COVID-19. Our findings show that while clinicians' trust in automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms that are not close to human judgement. Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system. Elsevier Ltd. 2022-07 2022-05-08 /pmc/articles/PMC9080676/ /pubmed/35551007 http://dx.doi.org/10.1016/j.compbiomed.2022.105587 Text en © 2022 Elsevier Ltd. All rights reserved. Since January 2020 Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website. Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre - including this research content - immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database, with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active. |
spellingShingle | Article Goel, Kanika; Sindhgatta, Renuka; Kalra, Sumit; Goel, Rohan; Mutreja, Preeti The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title | The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title_full | The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title_fullStr | The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title_full_unstemmed | The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title_short | The effect of machine learning explanations on user trust for automated diagnosis of COVID-19 |
title_sort | effect of machine learning explanations on user trust for automated diagnosis of covid-19 |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9080676/ https://www.ncbi.nlm.nih.gov/pubmed/35551007 http://dx.doi.org/10.1016/j.compbiomed.2022.105587 |
work_keys_str_mv | AT goelkanika theeffectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT sindhgattarenuka theeffectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT kalrasumit theeffectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT goelrohan theeffectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT mutrejapreeti theeffectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT goelkanika effectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT sindhgattarenuka effectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT kalrasumit effectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT goelrohan effectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 AT mutrejapreeti effectofmachinelearningexplanationsonusertrustforautomateddiagnosisofcovid19 |