Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging
SIMPLE SUMMARY: While deep learning has become a powerful tool in analysis of cancer imaging, deep learning models have potential vulnerabilities that pose security threats in the setting of clinical implementation. One weakness of deep learning models is that they can be deceived by adversarial ima...
Main Authors: | Joel, Marina Z., Avesta, Arman, Yang, Daniel X., Zhou, Jian-Ge, Omuro, Antonio, Herbst, Roy S., Krumholz, Harlan M., Aneja, Sanjay |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10000732/ https://www.ncbi.nlm.nih.gov/pubmed/36900339 http://dx.doi.org/10.3390/cancers15051548 |
_version_ | 1784903952296312832 |
---|---|
author | Joel, Marina Z. Avesta, Arman Yang, Daniel X. Zhou, Jian-Ge Omuro, Antonio Herbst, Roy S. Krumholz, Harlan M. Aneja, Sanjay |
author_facet | Joel, Marina Z. Avesta, Arman Yang, Daniel X. Zhou, Jian-Ge Omuro, Antonio Herbst, Roy S. Krumholz, Harlan M. Aneja, Sanjay |
author_sort | Joel, Marina Z. |
collection | PubMed |
description | SIMPLE SUMMARY: While deep learning has become a powerful tool in the analysis of cancer imaging, deep learning models have potential vulnerabilities that pose security threats in the setting of clinical implementation. One weakness of deep learning models is that they can be deceived by adversarial images, which are manipulated images that have pixels intentionally perturbed to alter the output of the deep learning model. Recent research has shown that adversarial detection models can differentiate adversarial images from normal images to protect deep learning models from attack. We compared the effectiveness of different adversarial detection schemes using three cancer imaging datasets (computed tomography, mammography, and magnetic resonance imaging). We found that the detection schemes demonstrate strong performance overall but exhibit limited efficacy in detecting a subset of adversarial images. We believe our findings provide a useful basis for the application of adversarial defenses to deep learning models for medical images in oncology. ABSTRACT: Deep learning (DL) models have demonstrated state-of-the-art performance in the classification of diagnostic imaging in oncology. However, DL models for medical images can be compromised by adversarial images, where pixel values of input images are manipulated to deceive the DL model. To address this limitation, our study investigates the detectability of adversarial images in oncology using multiple detection schemes. Experiments were conducted on thoracic computed tomography (CT) scans, mammography, and brain magnetic resonance imaging (MRI). For each dataset, we trained a convolutional neural network to classify the presence or absence of malignancy. We trained five DL- and machine learning (ML)-based detection models and tested their performance in detecting adversarial images. Adversarial images generated using projected gradient descent (PGD) with a perturbation size of 0.004 were detected by the ResNet detection model with an accuracy of 100% for CT, 100% for mammogram, and 90.0% for MRI. Overall, adversarial images were detected with high accuracy in settings where the adversarial perturbation was above set thresholds. Adversarial detection should be considered alongside adversarial training as a defense technique to protect DL models for cancer imaging classification from the threat of adversarial images. |
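To make the attack named in the abstract concrete, the sketch below shows a minimal projected gradient descent (PGD) attack against an image classifier. It assumes a PyTorch model and uses the perturbation size of 0.004 reported above; the step size, iteration count, and function name are illustrative assumptions, not the authors' implementation.

```python
# Minimal, illustrative PGD attack sketch (not the authors' implementation).
# Assumes a PyTorch classifier producing logits; epsilon = 0.004 mirrors the
# perturbation size reported in the abstract, while alpha and steps are assumed.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon=0.004, alpha=0.001, steps=10):
    """Return adversarial versions of `images` inside an L-infinity ball of radius epsilon."""
    model.eval()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        # Take a signed gradient ascent step, then project back into the
        # epsilon-ball around the original images and the valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()
```

A detection model (such as the ResNet detector evaluated in the study) would then be trained to distinguish the resulting adversarial batches from clean images.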
format | Online Article Text |
id | pubmed-10000732 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10000732 2023-03-11 Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging Joel, Marina Z. Avesta, Arman Yang, Daniel X. Zhou, Jian-Ge Omuro, Antonio Herbst, Roy S. Krumholz, Harlan M. Aneja, Sanjay Cancers (Basel) Article SIMPLE SUMMARY: While deep learning has become a powerful tool in the analysis of cancer imaging, deep learning models have potential vulnerabilities that pose security threats in the setting of clinical implementation. One weakness of deep learning models is that they can be deceived by adversarial images, which are manipulated images that have pixels intentionally perturbed to alter the output of the deep learning model. Recent research has shown that adversarial detection models can differentiate adversarial images from normal images to protect deep learning models from attack. We compared the effectiveness of different adversarial detection schemes using three cancer imaging datasets (computed tomography, mammography, and magnetic resonance imaging). We found that the detection schemes demonstrate strong performance overall but exhibit limited efficacy in detecting a subset of adversarial images. We believe our findings provide a useful basis for the application of adversarial defenses to deep learning models for medical images in oncology. ABSTRACT: Deep learning (DL) models have demonstrated state-of-the-art performance in the classification of diagnostic imaging in oncology. However, DL models for medical images can be compromised by adversarial images, where pixel values of input images are manipulated to deceive the DL model. To address this limitation, our study investigates the detectability of adversarial images in oncology using multiple detection schemes. Experiments were conducted on thoracic computed tomography (CT) scans, mammography, and brain magnetic resonance imaging (MRI). For each dataset, we trained a convolutional neural network to classify the presence or absence of malignancy. We trained five DL- and machine learning (ML)-based detection models and tested their performance in detecting adversarial images. Adversarial images generated using projected gradient descent (PGD) with a perturbation size of 0.004 were detected by the ResNet detection model with an accuracy of 100% for CT, 100% for mammogram, and 90.0% for MRI. Overall, adversarial images were detected with high accuracy in settings where the adversarial perturbation was above set thresholds. Adversarial detection should be considered alongside adversarial training as a defense technique to protect DL models for cancer imaging classification from the threat of adversarial images. MDPI 2023-03-01 /pmc/articles/PMC10000732/ /pubmed/36900339 http://dx.doi.org/10.3390/cancers15051548 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Joel, Marina Z. Avesta, Arman Yang, Daniel X. Zhou, Jian-Ge Omuro, Antonio Herbst, Roy S. Krumholz, Harlan M. Aneja, Sanjay Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title | Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title_full | Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title_fullStr | Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title_full_unstemmed | Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title_short | Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging |
title_sort | comparing detection schemes for adversarial images against deep learning models for cancer imaging |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10000732/ https://www.ncbi.nlm.nih.gov/pubmed/36900339 http://dx.doi.org/10.3390/cancers15051548 |
work_keys_str_mv | AT joelmarinaz comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT avestaarman comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT yangdanielx comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT zhoujiange comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT omuroantonio comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT herbstroys comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT krumholzharlanm comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging AT anejasanjay comparingdetectionschemesforadversarialimagesagainstdeeplearningmodelsforcancerimaging |