
Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples. Black-box transfer attacks pose a serious threat to AI applications because they require no access to the target model. At present, the most effective black-box attack methods mainly adopt data enhancement methods, such as input...
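
The record's title names Random Erasing as one of the input transformations used alongside the noise data enhancement framework. As a purely illustrative aid, the sketch below shows a generic Random Erasing augmentation applied to an image tensor; it is not the authors' method, and the function name random_erasing and the defaults for p, area_range, and aspect_range are assumptions chosen for the example.

import math
import random
import torch

def random_erasing(img, p=0.5, area_range=(0.02, 0.2), aspect_range=(0.3, 3.3)):
    """Randomly erase a rectangular patch of a CHW float image tensor.

    Generic sketch of the standard Random Erasing augmentation; all
    parameter defaults are illustrative, not taken from the paper.
    """
    if random.random() > p:
        return img
    c, h, w = img.shape
    for _ in range(10):  # retry a few times until a box fits inside the image
        target_area = random.uniform(*area_range) * h * w
        aspect = random.uniform(*aspect_range)
        eh = int(round(math.sqrt(target_area * aspect)))
        ew = int(round(math.sqrt(target_area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            # fill the erased region with random noise
            img[:, top:top + eh, left:left + ew] = torch.rand(c, eh, ew)
            return img
    return img

In a transfer-attack pipeline, a transformation of this kind would typically be applied to the input before each gradient step of the attack, so that the crafted perturbation does not overfit the surrogate model.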


Bibliographic Details
Main Authors: Xie, Pengfei, Shi, Shuhao, Yang, Shuai, Qiao, Kai, Liang, Ningning, Wang, Linyuan, Chen, Jian, Hu, Guoen, Yan, Bin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8696674/
https://www.ncbi.nlm.nih.gov/pubmed/34955802
http://dx.doi.org/10.3389/fnbot.2021.784053

Similar Items