Data-Free Adversarial Perturbations for Practical Black-Box Attack

Neural networks are vulnerable to adversarial examples: malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box transferability, meaning that adversarial examples crafted for one model can fool another model. However, existing black-box attack methods require samples from the training data distribution to improve the transferability of adversarial examples across different models. Because of this data dependence, the fooling ability of adversarial perturbations applies only when training data are accessible. In this paper, we present a data-free method for crafting adversarial perturbations that can fool a target model without any knowledge of the training data distribution. In the practical black-box attack scenario, where attackers have access to neither the target model nor the training data, our method achieves high fooling rates on target models and outperforms other universal adversarial perturbation methods. Our method empirically shows that current deep learning models are still at risk even when attackers do not have access to training data.
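To make the setting concrete, the sketch below shows one way a data-free universal perturbation could be optimized: gradient ascent on the internal activation magnitudes of a local surrogate model, so that the perturbation disrupts intermediate features of whatever image it is added to, without using any training samples. The surrogate choice (VGG-16), the loss, and the hyperparameters are illustrative assumptions, not the authors' exact objective.

```python
# Minimal sketch (not the paper's exact method): craft a data-free universal
# perturbation by maximizing internal activations of a local surrogate model.
# Surrogate, loss, and hyperparameters here are assumptions for illustration.
import torch
import torchvision.models as models

surrogate = models.vgg16(weights="IMAGENET1K_V1").eval()  # local surrogate; real target stays black-box
for p in surrogate.parameters():
    p.requires_grad_(False)

eps = 10 / 255  # L-infinity budget keeping the perturbation quasi-imperceptible
delta = torch.empty(1, 3, 224, 224).uniform_(-eps, eps).requires_grad_(True)
opt = torch.optim.Adam([delta], lr=5e-3)

for step in range(200):
    opt.zero_grad()
    # Run the perturbation itself through the surrogate's convolutional stack
    # and collect post-ReLU activations; no training images are needed.
    acts, x = [], delta
    for layer in surrogate.features:
        x = layer(x)
        if isinstance(layer, torch.nn.ReLU):
            acts.append(x)
    # Gradient ascent on activation magnitudes: a perturbation that saturates
    # intermediate features tends to corrupt predictions on arbitrary inputs.
    loss = -sum(a.norm() for a in acts)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # project back onto the L-infinity ball

# Attack time: add delta to any input and query the unknown target model,
# relying on black-box transferability, e.g. (image + delta).clamp(0, 1).
```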


Bibliographic Details
Main Authors: Huan, Zhaoxin; Wang, Yulong; Zhang, Xiaolu; Shang, Lin; Fu, Chilin; Zhou, Jun
Format: Online Article Text
Language: English
Published: Springer Nature Switzerland AG, 2020 (in Advances in Knowledge Discovery and Data Mining)
Subjects: Article
Collection: PubMed (National Center for Biotechnology Information)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206253/
http://dx.doi.org/10.1007/978-3-030-47436-2_10
Published online: 2020-04-17
© Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.