Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Black-box transfer attacks, which require no access to the target model, pose a serious threat to AI applications. At present, the most effective black-box attack methods mainly adopt data enhancement methods, such as input...
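The random erasing augmentation named in the title can be sketched as follows. This is a minimal illustrative implementation of the generic random-erasing technique, not the paper's method; the function name and parameter defaults are assumptions.

```python
import numpy as np

def random_erase(img, scale=(0.02, 0.2), value=0.0, rng=None):
    """Zero out (or fill) a random rectangular patch of an image.

    img   : array of shape (H, W, C)
    scale : range of the erased area as a fraction of the image area
    value : fill value for the erased patch
    """
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    # sample the patch area and an aspect ratio, derive patch height/width
    area = h * w * rng.uniform(*scale)
    aspect = rng.uniform(0.5, 2.0)
    ph = min(h, max(1, int(round(np.sqrt(area * aspect)))))
    pw = min(w, max(1, int(round(np.sqrt(area / aspect)))))
    # place the patch uniformly at random inside the image
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    out = img.copy()
    out[top:top + ph, left:left + pw] = value
    return out
```

Applied to an input before each gradient step of an attack, such randomized transforms diversify the inputs seen by the surrogate model, which is the general mechanism by which input-transformation methods improve transferability.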
Main Authors: Xie, Pengfei; Shi, Shuhao; Yang, Shuai; Qiao, Kai; Liang, Ningning; Wang, Linyuan; Chen, Jian; Hu, Guoen; Yan, Bin
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2021
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8696674/
https://www.ncbi.nlm.nih.gov/pubmed/34955802
http://dx.doi.org/10.3389/fnbot.2021.784053
Similar Items
- ShapeEditor: A StyleGAN Encoder for Stable and High Fidelity Face Swapping
  by: Yang, Shuai, et al.
  Published: (2022)
- Adversarial robustness in deep neural networks based on variable attributes of the stochastic ensemble model
  by: Qin, Ruoxi, et al.
  Published: (2023)
- Boosting-GNN: Boosting Algorithm for Graph Networks on Imbalanced Node Classification
  by: Shi, Shuhao, et al.
  Published: (2021)
- Improving the adversarial transferability with relational graphs ensemble adversarial attack
  by: Pi, Jiatian, et al.
  Published: (2023)
- Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks
  by: Bao, Guangcheng, et al.
  Published: (2021)