
Failure Detection in Deep Neural Networks for Medical Imaging


Bibliographic Details
Main Authors: Ahmed, Sabeen, Dera, Dimah, Hassan, Saud Ul, Bouaynaya, Nidhal, Rasool, Ghulam
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359318/
https://www.ncbi.nlm.nih.gov/pubmed/35958121
http://dx.doi.org/10.3389/fmedt.2022.919046
author Ahmed, Sabeen
Dera, Dimah
Hassan, Saud Ul
Bouaynaya, Nidhal
Rasool, Ghulam
collection PubMed
description Deep neural networks (DNNs) have started to find their role in the modern healthcare system. DNNs are being developed for diagnosis, prognosis, treatment planning, and outcome prediction for various diseases. With the increasing number of applications of DNNs in modern healthcare, their trustworthiness and reliability are becoming increasingly important. An essential aspect of trustworthiness is detecting the performance degradation and failure of deployed DNNs in medical settings. The softmax output values produced by DNNs are not a calibrated measure of model confidence. Softmax probability values are generally higher than the actual model confidence. The gap between model confidence and accuracy widens further for wrong predictions and noisy inputs. We employ recently proposed Bayesian deep neural networks (BDNNs) to learn uncertainty in the model parameters. These models simultaneously output the predictions and a measure of confidence in the predictions. By testing these models under various noisy conditions, we show that the (learned) predictive confidence is well calibrated. We use these reliable confidence values for monitoring performance degradation and failure detection in DNNs. We propose two different failure detection methods. In the first method, we define a fixed threshold value based on the behavior of the predictive confidence with changing signal-to-noise ratio (SNR) of the test dataset. The second method learns the threshold value with a neural network. The proposed failure detection mechanisms seamlessly abstain from making decisions when the confidence of the BDNN is below the defined threshold and hold the decision for manual review. As a result, the accuracy of the models improves on the unseen test samples. We tested our proposed approach on three medical imaging datasets: PathMNIST, DermaMNIST, and OrganAMNIST, under different levels and types of noise. An increase in the noise of the test images increases the number of abstained samples. BDNNs are inherently robust and show more than 10% accuracy improvement with the proposed failure detection methods. An increased number of abstained samples or an abrupt increase in the predictive variance indicates model performance degradation or possible failure. Our work has the potential to improve the trustworthiness of DNNs and enhance user confidence in the model predictions.
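The abstention mechanism described in the abstract — abstain whenever predictive confidence falls below a threshold and defer that sample to manual review — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the confidence values, the threshold, and the function name are assumptions, and in the paper the confidences come from a BDNN while the threshold is either fixed from SNR behavior or learned by a network.

```python
def predict_with_abstention(confidences, predictions, labels, threshold=0.8):
    """Keep predictions whose confidence is at least `threshold`; abstain on the rest.

    confidences: per-sample predictive confidence in [0, 1] (assumed given)
    predictions: model's predicted class per sample
    labels:      ground-truth class per sample
    Returns (accuracy on retained samples or None, abstention rate).
    """
    # Retain only the samples the model is confident enough to decide on.
    kept = [(p, y) for c, p, y in zip(confidences, predictions, labels)
            if c >= threshold]
    abstain_rate = 1 - len(kept) / len(confidences)  # fraction sent to manual review
    if not kept:
        return None, abstain_rate  # everything was deferred
    accuracy = sum(p == y for p, y in kept) / len(kept)
    return accuracy, abstain_rate

# Toy example: one low-confidence sample is abstained; of the three retained,
# two are correct, so retained accuracy rises relative to raw accuracy (2/4).
acc, rate = predict_with_abstention(
    confidences=[0.95, 0.40, 0.90, 0.85],
    predictions=[1, 0, 2, 2],
    labels=[1, 1, 2, 0],
)
```

This illustrates the mechanism by which abstention improves accuracy on the decided samples: low-confidence (often wrong) predictions are removed from the denominator, while noisy inputs drive the abstention rate up — the signal the paper uses to flag performance degradation.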
format Online Article Text
id pubmed-9359318
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9359318 2022-08-10 Failure Detection in Deep Neural Networks for Medical Imaging. Ahmed, Sabeen; Dera, Dimah; Hassan, Saud Ul; Bouaynaya, Nidhal; Rasool, Ghulam. Front Med Technol (Medical Technology). Frontiers Media S.A. 2022-07-22. /pmc/articles/PMC9359318/ /pubmed/35958121 http://dx.doi.org/10.3389/fmedt.2022.919046 Text en Copyright © 2022 Ahmed, Dera, Hassan, Bouaynaya and Rasool. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.
title Failure Detection in Deep Neural Networks for Medical Imaging
topic Medical Technology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359318/
https://www.ncbi.nlm.nih.gov/pubmed/35958121
http://dx.doi.org/10.3389/fmedt.2022.919046