Data-Free Adversarial Perturbations for Practical Black-Box Attack
Neural networks are vulnerable to adversarial examples: malicious inputs crafted to fool pre-trained models. Adversarial examples often exhibit black-box attack transferability, meaning that an adversarial example crafted for one model can also fool another model. However, existing black-...
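The abstract describes adversarial perturbations and their transferability only in general terms. As a minimal self-contained illustration of the underlying idea (the classic white-box FGSM attack, not the data-free black-box method proposed in this paper), a perturbation computed from one model's gradients can be tested against a different model; `source_model` and `target_model` below are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        # Fast Gradient Sign Method: one gradient step that increases the loss.
        # Illustrative only; the paper's method needs no access to training data.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Move each input coordinate by eps in the gradient's sign direction,
        # then clamp back to the valid pixel range [0, 1].
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Transferability check with hypothetical models:
    # x_adv = fgsm_perturb(source_model, x, y)
    # fooled = (target_model(x_adv).argmax(dim=1) != y)  # misled the *other* model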
Main Authors: Huan, Zhaoxin; Wang, Yulong; Zhang, Xiaolu; Shang, Lin; Fu, Chilin; Zhou, Jun
Format: Online Article Text
Language: English
Published: 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206253/
http://dx.doi.org/10.1007/978-3-030-47436-2_10
Similar Items
- An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning
  by: Chen, Zhiyu, et al.
  Published: (2022)
- A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization
  by: Suryanto, Naufal, et al.
  Published: (2020)
- Perturbing BEAMs: EEG adversarial attack to deep learning models for epilepsy diagnosing
  by: Yu, Jianfeng, et al.
  Published: (2023)
- Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples
  by: Mahmood, Kaleel, et al.
  Published: (2021)
- Adversarial attacks and adversarial robustness in computational pathology
  by: Ghaffari Laleh, Narmin, et al.
  Published: (2022)