Defending against adversarial attacks on Covid-19 classifier: A denoiser-based approach
Format: Online Article Text
Language: English
Published: Elsevier, 2022
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9595496/
https://www.ncbi.nlm.nih.gov/pubmed/36311356
http://dx.doi.org/10.1016/j.heliyon.2022.e11209
Summary: Covid-19 has posed a serious threat to the human race. Early detection of the virus is vital to containing it effectively and treating patients. Established testing methods such as the real-time reverse transcription-polymerase chain reaction (RT-PCR) test and the Rapid Antigen Test (RAT) are used for detection, but they have limitations. The need for early detection has led researchers to explore other testing techniques. Deep Neural Network (DNN) models have shown high potential in medical image classification, and researchers have built various models that exhibit high accuracy for Covid-19 detection from chest X-ray images. However, DNNs are known to be inherently susceptible to adversarial inputs, which can compromise a model's results.

In this paper, the adversarial robustness of such Covid-19 classifiers is evaluated by performing common adversarial attacks, including the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Under these attacks, the accuracy of the models on Covid-19 samples decreases drastically. In the medical domain, adversarial training is the most widely explored defense against adversarial attacks, but it requires replacing the original model and retraining it on adversarial samples. Another defensive technique, the High-Level Representation Guided Denoiser (HGD), overcomes this limitation by employing an adversarial filter that is also transferable across models. Moreover, the HGD architecture is suited to high-resolution images, making it a good candidate for medical imaging applications.

In this paper, the HGD architecture is evaluated as a potential defense for medical image analysis. Experiments show an accuracy increase of up to 82% in the white-box setting. In the black-box setting, however, the defense fails completely against adversarial samples.
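The two attacks named in the summary can be illustrated in a few lines. Below is a minimal NumPy sketch of FGSM and PGD against a toy binary logistic classifier; the weights, inputs, epsilon, and step sizes here are illustrative assumptions, not the paper's DNN models or settings. FGSM takes a single step of size epsilon in the direction of the sign of the loss gradient with respect to the input; PGD repeats smaller steps and projects the result back into the epsilon-ball around the original input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM for a binary logistic classifier.

    For binary cross-entropy loss, the gradient of the loss w.r.t. the
    input x is (sigmoid(w.x + b) - y) * w; FGSM moves eps in the
    direction of its sign.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

def pgd_perturb(x, y, w, b, eps, alpha=0.1, steps=10):
    """Iterative PGD: repeated FGSM-style steps of size alpha,
    each followed by projection back into the L-inf eps-ball around x."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = (sigmoid(w @ x_adv + b) - y) * w
        x_adv = x_adv + alpha * np.sign(grad_x)
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)  # stay within the eps-ball
    return x_adv

# Illustrative usage: a clean input classified as the positive class
# can be pushed across the decision boundary by either attack.
x = np.array([1.0, 1.0])     # toy input
w = np.array([2.0, -1.0])    # toy classifier weights
b, y, eps = 0.0, 1.0, 0.5
x_fgsm = fgsm_perturb(x, y, w, b, eps)
x_pgd = pgd_perturb(x, y, w, b, eps)
```

On images, the same perturbation is applied per pixel, which is why a visually imperceptible change (small eps) can flip a classifier's output, the failure mode the paper's HGD denoiser is meant to filter out before classification.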