
Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach

Covid-19 has posed a serious threat to the existence of the human race. Early detection of the virus is vital to effectively containing the virus and treating the patients. Profound testing methods such as the Real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Anti...

Full description

Bibliographic Details
Main Authors: Kansal, Keshav, Krishna, P Sai, Jain, Parshva B., R, Surya, Honnavalli, Prasad, Eswaran, Sivaraman
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9595496/
https://www.ncbi.nlm.nih.gov/pubmed/36311356
http://dx.doi.org/10.1016/j.heliyon.2022.e11209
_version_ 1784815664555360256
author Kansal, Keshav
Krishna, P Sai
Jain, Parshva B.
R, Surya
Honnavalli, Prasad
Eswaran, Sivaraman
author_facet Kansal, Keshav
Krishna, P Sai
Jain, Parshva B.
R, Surya
Honnavalli, Prasad
Eswaran, Sivaraman
author_sort Kansal, Keshav
collection PubMed
description Covid-19 has posed a serious threat to the existence of the human race. Early detection of the virus is vital to effectively containing the virus and treating the patients. Profound testing methods such as the Real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Antigen Test (RAT) are being used for detection, but they have their limitations. The need for early detection has led researchers to explore other testing techniques. Deep Neural Network (DNN) models have shown high potential in medical image classification and various models have been built by researchers which exhibit high accuracy for the task of Covid-19 detection using chest X-ray images. However, it is proven that DNNs are inherently susceptible to adversarial inputs, which can compromise the results of the models. In this paper, the adversarial robustness of such Covid-19 classifiers is evaluated by performing common adversarial attacks, which include the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Using these attacks, it is found that the accuracy of the models for Covid-19 samples decreases drastically. In the medical domain, adversarial training is the most widely explored technique to defend against adversarial attacks. However, using this technique requires replacing the original model and retraining it by including adversarial samples. Another defensive technique, High-Level Representation Guided Denoiser (HGD), overcomes this limitation by employing an adversarial filter which is also transferable across models. Moreover, the HGD architecture, being suitable for high-resolution images, makes it a good candidate for medical image applications. In this paper, the HGD architecture has been evaluated as a potential defensive technique for the task of medical image analysis. Experiments carried out show an increased accuracy of up to 82% in the white box setting. However, in the black box setting, the defense completely fails to defend against adversarial samples.
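For readers unfamiliar with the attacks named in the abstract, the following is a minimal, illustrative PyTorch sketch of FGSM and PGD. It is not the authors' experimental code: the classifier `model`, the perturbation budget `eps`, the step size `alpha`, and the assumption that inputs are image tensors scaled to [0, 1] are placeholders introduced here for illustration.

```python
# Illustrative sketch only (assumes PyTorch, a pretrained chest X-ray
# classifier `model`, and input tensors scaled to [0, 1]); hyperparameters
# are placeholders, not values reported in the paper.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.01):
    """Fast Gradient Sign Method: a single gradient-sign step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    """Projected Gradient Descent: repeated FGSM-style steps, each projected
    back into an L-infinity ball of radius eps around the clean image."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Comparing the classifier's accuracy on `pgd_attack` outputs against clean chest X-ray images illustrates the kind of accuracy drop the abstract reports.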
format Online
Article
Text
id pubmed-9595496
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9595496 2022-10-25 Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach Kansal, Keshav Krishna, P Sai Jain, Parshva B. R, Surya Honnavalli, Prasad Eswaran, Sivaraman Heliyon Research Article Covid-19 has posed a serious threat to the existence of the human race. Early detection of the virus is vital to effectively containing the virus and treating the patients. Profound testing methods such as the Real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Antigen Test (RAT) are being used for detection, but they have their limitations. The need for early detection has led researchers to explore other testing techniques. Deep Neural Network (DNN) models have shown high potential in medical image classification and various models have been built by researchers which exhibit high accuracy for the task of Covid-19 detection using chest X-ray images. However, it is proven that DNNs are inherently susceptible to adversarial inputs, which can compromise the results of the models. In this paper, the adversarial robustness of such Covid-19 classifiers is evaluated by performing common adversarial attacks, which include the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Using these attacks, it is found that the accuracy of the models for Covid-19 samples decreases drastically. In the medical domain, adversarial training is the most widely explored technique to defend against adversarial attacks. However, using this technique requires replacing the original model and retraining it by including adversarial samples. Another defensive technique, High-Level Representation Guided Denoiser (HGD), overcomes this limitation by employing an adversarial filter which is also transferable across models. Moreover, the HGD architecture, being suitable for high-resolution images, makes it a good candidate for medical image applications. In this paper, the HGD architecture has been evaluated as a potential defensive technique for the task of medical image analysis. Experiments carried out show an increased accuracy of up to 82% in the white box setting. However, in the black box setting, the defense completely fails to defend against adversarial samples. Elsevier 2022-10-22 /pmc/articles/PMC9595496/ /pubmed/36311356 http://dx.doi.org/10.1016/j.heliyon.2022.e11209 Text en © 2022 The Author(s) https://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
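The High-Level Representation Guided Denoiser mentioned in the record can be sketched in the same spirit. The snippet below is a simplified, assumed training step based on the general HGD idea (the denoiser predicts noise to subtract, and is trained so the guide classifier's high-level features of the denoised adversarial image match those of the clean image under an L1 loss); `denoiser`, `feature_extractor`, and `optimizer` are hypothetical names, and this is not the authors' implementation.

```python
# Simplified, assumed HGD-style training step (PyTorch); the guide
# classifier's feature extractor is frozen, and only the denoiser is trained.
import torch

def hgd_training_step(denoiser, feature_extractor, optimizer, x_clean, x_adv):
    """Train the denoiser so the guide model's high-level features of the
    denoised adversarial image match those of the clean image (L1 distance),
    rather than matching raw pixels."""
    optimizer.zero_grad()
    with torch.no_grad():
        target_feats = feature_extractor(x_clean)   # features of the clean image
    denoised = x_adv - denoiser(x_adv)              # denoiser predicts the noise to remove
    loss = (feature_extractor(denoised) - target_feats).abs().mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the loss is defined on high-level features and the guide classifier is never modified, the trained denoiser can be placed in front of an existing classifier without retraining it, which is the transferability property the abstract highlights.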
spellingShingle Research Article
Kansal, Keshav
Krishna, P Sai
Jain, Parshva B.
R, Surya
Honnavalli, Prasad
Eswaran, Sivaraman
Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title_full Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title_fullStr Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title_full_unstemmed Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title_short Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
title_sort defending against adversarial attacks on covid-19 classifier: a denoiser-based approach
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9595496/
https://www.ncbi.nlm.nih.gov/pubmed/36311356
http://dx.doi.org/10.1016/j.heliyon.2022.e11209
work_keys_str_mv AT kansalkeshav defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach
AT krishnapsai defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach
AT jainparshvab defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach
AT rsurya defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach
AT honnavalliprasad defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach
AT eswaransivaraman defendingagainstadversarialattacksoncovid19classifieradenoiserbasedapproach