ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
Deep neural network (DNN)-based systems are vulnerable to adversarial perturbations, which can cause classification failures. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without the...
Main authors: Cao, Han; Si, Chengxiang; Sun, Qindong; Liu, Yanxiao; Li, Shancang; Gope, Prosanta
Format: Online Article Text
Language: English
Published: MDPI, 2022
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953161/
https://www.ncbi.nlm.nih.gov/pubmed/35327923
http://dx.doi.org/10.3390/e24030412
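The abstract describes a gradient-free, score-based black-box attack driven by the Artificial Bee Colony algorithm. The sketch below is a minimal illustration of that idea: an ABC-style search for an L_inf-bounded perturbation, assuming only probability (score) access to the target model. The `query_probs` stand-in classifier, the combined employed/onlooker phase, and all hyper-parameters are assumptions made for illustration, not the authors' ABCAttack implementation.

```python
# Minimal ABC-style black-box attack sketch (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def query_probs(x):
    # Stand-in black-box classifier: returns a probability vector for input x.
    # In practice this would be a call to the deployed model's prediction API.
    logits = np.array([x.mean(), -x.mean(), x.std()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fitness(x_adv, true_label):
    # Untargeted objective: lower true-class probability means higher fitness.
    return 1.0 - query_probs(x_adv)[true_label]

def abc_attack(x, true_label, eps=0.3, n_food=20, max_iter=100, limit=10):
    """ABC-style search for an adversarial example inside an L_inf ball of radius eps."""
    d = x.size
    # Each food source is one candidate perturbation.
    foods = rng.uniform(-eps, eps, size=(n_food, d))
    fits = np.array([fitness(np.clip(x + f, 0, 1), true_label) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    for _ in range(max_iter):
        # Simplified employed + onlooker phase: refine each source toward a
        # fitness-weighted random peer along a single random dimension.
        probs = fits / fits.sum()
        for i in range(n_food):
            j = rng.choice(n_food, p=probs)
            if j == i:
                j = (j + 1) % n_food
            k = rng.integers(d)
            cand = foods[i].copy()
            cand[k] += rng.uniform(-1, 1) * (foods[i][k] - foods[j][k])
            cand = np.clip(cand, -eps, eps)
            f_new = fitness(np.clip(x + cand, 0, 1), true_label)
            if f_new > fits[i]:
                foods[i], fits[i], trials[i] = cand, f_new, 0
            else:
                trials[i] += 1
        # Scout phase: abandon stagnant sources and re-initialise them randomly.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.uniform(-eps, eps, size=d)
            fits[i] = fitness(np.clip(x + foods[i], 0, 1), true_label)
            trials[i] = 0
        best = int(fits.argmax())
        x_adv = np.clip(x + foods[best], 0, 1)
        if query_probs(x_adv).argmax() != true_label:
            return x_adv  # misclassification achieved
    return np.clip(x + foods[int(fits.argmax())], 0, 1)  # best effort after budget

if __name__ == "__main__":
    x = rng.uniform(0, 1, size=8 * 8)  # toy flattened image
    label = int(query_probs(x).argmax())
    adv = abc_attack(x, label)
    print("original:", label, "adversarial:", int(query_probs(adv).argmax()))
```

Because the employed and onlooker bees refine promising perturbations and scout bees re-initialise stagnant ones, the search needs only the classifier's output scores and no gradient information, which is what makes the attack black-box and gradient-free.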
Similar Items
- Data-Free Adversarial Perturbations for Practical Black-Box Attack
  by: Huan, Zhaoxin, et al.
  Published: (2020)
- Homomorphic Encryption and Some Black Box Attacks
  by: Borovik, Alexandre, et al.
  Published: (2020)
- FOOL'S HAVEN
  Published: (1953)
- Fool for love /
  by: Shepard, Sam, 1943-2017
  Published: (1985)
- The Praise of Fools
  Published: (1916)