Evaluation of GAN-Based Model for Adversarial Training
Deep learning has been successfully utilized in many applications, but it is vulnerable to adversarial samples. To address this vulnerability, a generative adversarial network (GAN) has been used to train a robust classifier. This paper presents a novel GAN model and its implementation to defend aga...
Main Authors: | Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007326/ https://www.ncbi.nlm.nih.gov/pubmed/36904900 http://dx.doi.org/10.3390/s23052697 |
_version_ | 1784905492939669504 |
---|---|
author | Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa |
author_facet | Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa |
author_sort | Zhao, Weimin |
collection | PubMed |
description | Deep learning has been successfully utilized in many applications, but it is vulnerable to adversarial samples. To address this vulnerability, a generative adversarial network (GAN) has been used to train a robust classifier. This paper presents a novel GAN model and its implementation to defend against L(∞) and L(2) constraint gradient-based adversarial attacks. The proposed model is inspired by related work, but it includes multiple new designs, such as a dual generator architecture, four new generator input formulations, and two unique implementations with L(∞) and L(2) norm constraint vector outputs. The new formulations and parameter settings of the GAN are proposed and evaluated to address the limitations of adversarial training and defensive GAN training strategies, such as gradient masking and training complexity. Furthermore, the training epoch parameter has been evaluated to determine its effect on the overall training results. The experimental results indicate that the optimal formulation of GAN adversarial training must utilize more gradient information from the target classifier. The results also demonstrate that GANs can overcome gradient masking and produce effective perturbations to augment the data. The model can defend against PGD L(2) 128/255 norm perturbations with over 60% accuracy and against PGD L(∞) 8/255 norm perturbations with around 45% accuracy. The results have also revealed that robustness can be transferred between the constraints of the proposed model. In addition, a robustness–accuracy tradeoff was discovered, along with overfitting and the generalization capabilities of the generator and classifier. These limitations and ideas for future work are also discussed. |
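For context, the evaluation setting reported in the abstract (classifier accuracy under a PGD L(∞) attack with an 8/255 budget) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `classifier` and `test_loader` are hypothetical placeholders for the trained model and the test data, and the attack shown is standard PGD with a random start, written in PyTorch.

```python
# Sketch only: standard PGD L-infinity evaluation, not the paper's GAN-based defense code.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity bounded adversarial examples with projected gradient descent."""
    delta = torch.empty_like(x).uniform_(-eps, eps)   # random start inside the eps-ball
    delta = (x + delta).clamp(0, 1) - x               # keep perturbed images in [0, 1]
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x           # re-project to the valid image box
    return (x + delta).detach()

def robust_accuracy(model, loader, eps=8/255):
    """Fraction of test samples still classified correctly under the PGD L-infinity attack."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = pgd_linf(model, x, y, eps=eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical usage: `classifier` is the GAN-trained model, `test_loader` yields (x, y) batches.
# print("robust accuracy @ L-inf 8/255:", robust_accuracy(classifier, test_loader))
```

Under this kind of evaluation, the paper reports around 45% robust accuracy at PGD L(∞) 8/255 and over 60% at PGD L(2) 128/255 for the proposed GAN-trained classifier.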
format | Online Article Text |
id | pubmed-10007326 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10007326 2023-03-12 Evaluation of GAN-Based Model for Adversarial Training Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa. Sensors (Basel), Article. MDPI 2023-03-01 /pmc/articles/PMC10007326/ /pubmed/36904900 http://dx.doi.org/10.3390/s23052697 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhao, Weimin; Mahmoud, Qusay H.; Alwidian, Sanaa Evaluation of GAN-Based Model for Adversarial Training |
title | Evaluation of GAN-Based Model for Adversarial Training |
title_full | Evaluation of GAN-Based Model for Adversarial Training |
title_fullStr | Evaluation of GAN-Based Model for Adversarial Training |
title_full_unstemmed | Evaluation of GAN-Based Model for Adversarial Training |
title_short | Evaluation of GAN-Based Model for Adversarial Training |
title_sort | evaluation of gan-based model for adversarial training |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007326/ https://www.ncbi.nlm.nih.gov/pubmed/36904900 http://dx.doi.org/10.3390/s23052697 |
work_keys_str_mv | AT zhaoweimin evaluationofganbasedmodelforadversarialtraining AT mahmoudqusayh evaluationofganbasedmodelforadversarialtraining AT alwidiansanaa evaluationofganbasedmodelforadversarialtraining |