
Data-Free Adversarial Perturbations for Practical Black-Box Attack

Neural networks are vulnerable to adversarial examples, which are malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box transferability, meaning that adversarial examples crafted for one model can also fool another model. However, existing black-...


Bibliographic Details
Main Authors: Huan, Zhaoxin, Wang, Yulong, Zhang, Xiaolu, Shang, Lin, Fu, Chilin, Zhou, Jun
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206253/
http://dx.doi.org/10.1007/978-3-030-47436-2_10

Similar Items