Generating adversarial examples without specifying a target model
Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require the query authority of the target during their work. In a more practical situation, the attacker will be easily detected because of too many queries, and this problem is especially obvious under the black-box setting. To solve the problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model in generating adversarial examples, so it does not need to query the target. Experimental results show that it achieved a maximum attack success rate of 81.78% in the MNIST data set and 87.99% in the CIFAR-10 data set. In addition, it has a low time cost because it is a GAN-based method.
Main Authors: | Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu |
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2021 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459786/ https://www.ncbi.nlm.nih.gov/pubmed/34616888 http://dx.doi.org/10.7717/peerj-cs.702 |
_version_ | 1784571600059760640 |
author | Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu |
author_facet | Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu |
author_sort | Yang, Gaoming |
collection | PubMed |
description | Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require the query authority of the target during their work. In a more practical situation, the attacker will be easily detected because of too many queries, and this problem is especially obvious under the black-box setting. To solve the problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model in generating adversarial examples, so it does not need to query the target. Experimental results show that it achieved a maximum attack success rate of 81.78% in the MNIST data set and 87.99% in the CIFAR-10 data set. In addition, it has a low time cost because it is a GAN-based method. |
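The description above outlines a GAN-based method (AWTM) whose generator produces adversarial examples without querying any target model. The record does not give the paper's architecture or API, so the following is a hypothetical NumPy sketch of the inference-time step such a method implies: a pretrained generator (here a stand-in callable, not the authors' actual model) maps a clean image to a perturbation, which is bounded and combined with the image so that no target-model query is ever made.

```python
import numpy as np

def awtm_style_attack(image, generator, epsilon=0.3):
    """Hypothetical sketch of query-free adversarial example generation.

    image:     np.ndarray with pixel values in [0, 1]
    generator: callable mapping an image to a raw perturbation; a
               stand-in for a pretrained GAN generator (assumption,
               not the paper's actual interface)
    epsilon:   L-infinity bound on the perturbation
    """
    raw = generator(image)
    # Bound the perturbation to an L-infinity ball of radius epsilon,
    # keeping the adversarial example visually close to the original.
    perturbation = np.clip(raw, -epsilon, epsilon)
    # Keep the result in the valid pixel range; no target model is
    # queried anywhere in this procedure.
    return np.clip(image + perturbation, 0.0, 1.0)

# Toy usage with a dummy "generator" (random noise as a placeholder
# for a trained network), on an MNIST-sized 28x28 input:
rng = np.random.default_rng(0)
img = rng.random((28, 28))
adv = awtm_style_attack(img, lambda x: rng.normal(0.0, 1.0, x.shape))
```

The key property, matching the abstract's claim, is that attack cost is a single forward pass of the generator rather than an iterative query loop against the victim model.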
format | Online Article Text |
id | pubmed-8459786 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8459786 2021-10-05 Generating adversarial examples without specifying a target model Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu PeerJ Comput Sci Artificial Intelligence Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require the query authority of the target during their work. In a more practical situation, the attacker will be easily detected because of too many queries, and this problem is especially obvious under the black-box setting. To solve the problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model in generating adversarial examples, so it does not need to query the target. Experimental results show that it achieved a maximum attack success rate of 81.78% in the MNIST data set and 87.99% in the CIFAR-10 data set. In addition, it has a low time cost because it is a GAN-based method. PeerJ Inc. 2021-09-13 /pmc/articles/PMC8459786/ /pubmed/34616888 http://dx.doi.org/10.7717/peerj-cs.702 Text en © 2021 Yang et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
spellingShingle | Artificial Intelligence; Yang, Gaoming; Li, Mingwei; Fang, Xianjing; Zhang, Ji; Liang, Xingzhu; Generating adversarial examples without specifying a target model |
title | Generating adversarial examples without specifying a target model |
title_full | Generating adversarial examples without specifying a target model |
title_fullStr | Generating adversarial examples without specifying a target model |
title_full_unstemmed | Generating adversarial examples without specifying a target model |
title_short | Generating adversarial examples without specifying a target model |
title_sort | generating adversarial examples without specifying a target model |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459786/ https://www.ncbi.nlm.nih.gov/pubmed/34616888 http://dx.doi.org/10.7717/peerj-cs.702 |
work_keys_str_mv | AT yanggaoming generatingadversarialexampleswithoutspecifyingatargetmodel AT limingwei generatingadversarialexampleswithoutspecifyingatargetmodel AT fangxianjing generatingadversarialexampleswithoutspecifyingatargetmodel AT zhangji generatingadversarialexampleswithoutspecifyingatargetmodel AT liangxingzhu generatingadversarialexampleswithoutspecifyingatargetmodel |