Image classification adversarial attack with improved resizing transformation and ensemble models
Convolutional neural networks have achieved great success in computer vision, but they can be made to output incorrect predictions when intentional perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property can be used to evaluate network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in a black-box attack the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used to train generalizable neural networks, and proposes a resizing invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal and defended models.
Main Authors: | Li, Chenwei; Zhang, Hengwei; Yang, Bo; Wang, Jindong |
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2023 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403174/ https://www.ncbi.nlm.nih.gov/pubmed/37547405 http://dx.doi.org/10.7717/peerj-cs.1475 |
_version_ | 1785085010265505792 |
author | Li, Chenwei; Zhang, Hengwei; Yang, Bo; Wang, Jindong |
author_facet | Li, Chenwei; Zhang, Hengwei; Yang, Bo; Wang, Jindong |
author_sort | Li, Chenwei |
collection | PubMed |
description | Convolutional neural networks have achieved great success in computer vision, but they can be made to output incorrect predictions when intentional perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property can be used to evaluate network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in a black-box attack the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used to train generalizable neural networks, and proposes a resizing invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal and defended models. |
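The description above names two ingredients: a resizing transformation applied to the input for model augmentation, and gradient fusion across an ensemble of models. As a rough illustration only, the sketch below implements both in plain NumPy; the function names, the nearest-neighbour resize, and the zero-padding placement are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_resize_pad(image, low, high):
    """Randomly rescale a 2-D image (nearest-neighbour) to a side length in
    [low, high), then zero-pad it at a random offset back to high x high.
    This mimics the kind of input resizing used for model augmentation."""
    size = int(rng.integers(low, high))
    h, w = image.shape
    # Nearest-neighbour index maps for the resize.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[np.ix_(rows, cols)]
    padded = np.zeros((high, high), dtype=image.dtype)
    top = int(rng.integers(0, high - size + 1))
    left = int(rng.integers(0, high - size + 1))
    padded[top:top + size, left:left + size] = resized
    return padded

def ensemble_grad_sign(grads):
    """Average the per-model gradients (a simple fusion stand-in) and take
    the sign, as in FGSM-style attacks on an ensemble of models."""
    return np.sign(np.mean(grads, axis=0))

# One illustrative FGSM-style step on a transformed input would then be:
#   x_adv = np.clip(x + 0.03 * ensemble_grad_sign(grads), 0.0, 1.0)
```

The transformed copies feed each model in the ensemble; averaging the resulting gradients before taking the sign is what pushes the perturbation toward features shared across models, which is the intuition behind improved black-box transferability.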
format | Online Article Text |
id | pubmed-10403174 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-104031742023-08-05 Image classification adversarial attack with improved resizing transformation and ensemble models Li, Chenwei Zhang, Hengwei Yang, Bo Wang, Jindong PeerJ Comput Sci Artificial Intelligence Convolutional neural networks have achieved great success in computer vision, but they can be made to output incorrect predictions when intentional perturbations are applied to the original input. These human-indistinguishable replicas are called adversarial examples, and this property can be used to evaluate network robustness and security. The white-box attack success rate is considerable when the network structure and parameters are already known, but in a black-box attack the success rate of adversarial examples is relatively low and their transferability remains to be improved. This article draws on model augmentation, which is derived from the data augmentation used to train generalizable neural networks, and proposes a resizing invariance method. The proposed method introduces an improved resizing transformation to achieve model augmentation. In addition, ensemble models are used to generate more transferable adversarial examples. Extensive experiments verify that this method outperforms baseline methods, including the original model augmentation method, and that the black-box attack success rate is improved on both normal and defended models. PeerJ Inc. 2023-07-25 /pmc/articles/PMC10403174/ /pubmed/37547405 http://dx.doi.org/10.7717/peerj-cs.1475 Text en © 2023 Li et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed.
For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
spellingShingle | Artificial Intelligence Li, Chenwei Zhang, Hengwei Yang, Bo Wang, Jindong Image classification adversarial attack with improved resizing transformation and ensemble models |
title | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_full | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_fullStr | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_full_unstemmed | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_short | Image classification adversarial attack with improved resizing transformation and ensemble models |
title_sort | image classification adversarial attack with improved resizing transformation and ensemble models |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403174/ https://www.ncbi.nlm.nih.gov/pubmed/37547405 http://dx.doi.org/10.7717/peerj-cs.1475 |
work_keys_str_mv | AT lichenwei imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT zhanghengwei imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT yangbo imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels AT wangjindong imageclassificationadversarialattackwithimprovedresizingtransformationandensemblemodels |