
ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers

Deep neural network (DNN)-based systems are vulnerable to adversarial perturbations, which can cause classification tasks to fail. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without gradient evaluation or the training of a substitute model, which further improves the chances that adversarial perturbation causes task failure. In untargeted attacks, the proposed method achieved success rates of 100%, 98.6%, and 90.0% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only achieves a high attack success rate with fewer queries in the black-box setting, but also largely breaks several existing defenses, and is not limited by model structure or size, which suggests further research directions for deep learning evasion attacks and defenses.
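The abstract's untargeted results refer to the standard evasion objective sketched below; the L∞ bound and the symbol ε are shown for concreteness and are assumptions, since this record does not state which perturbation norm the paper uses.

```latex
% Untargeted evasion: given classifier f and input x with true label y,
% find a small perturbation \delta that flips the predicted class,
% using only query access to f (no gradients, no substitute model).
\[
\text{find } \delta \ \text{such that}\ \operatorname*{arg\,max}_{c} f_c(x+\delta) \neq y,
\qquad \lVert \delta \rVert_\infty \le \varepsilon .
\]
```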

Bibliographic Details
Main Authors: Cao, Han; Si, Chengxiang; Sun, Qindong; Liu, Yanxiao; Li, Shancang; Gope, Prosanta
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953161/
https://www.ncbi.nlm.nih.gov/pubmed/35327923
http://dx.doi.org/10.3390/e24030412
_version_ 1784675781428903936
author Cao, Han
Si, Chengxiang
Sun, Qindong
Liu, Yanxiao
Li, Shancang
Gope, Prosanta
author_facet Cao, Han
Si, Chengxiang
Sun, Qindong
Liu, Yanxiao
Li, Shancang
Gope, Prosanta
author_sort Cao, Han
collection PubMed
description Deep neural network (DNN)-based systems are vulnerable to adversarial perturbations, which can cause classification tasks to fail. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without gradient evaluation or the training of a substitute model, which further improves the chances that adversarial perturbation causes task failure. In untargeted attacks, the proposed method achieved success rates of 100%, 98.6%, and 90.0% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only achieves a high attack success rate with fewer queries in the black-box setting, but also largely breaks several existing defenses, and is not limited by model structure or size, which suggests further research directions for deep learning evasion attacks and defenses.
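The description above names the Artificial Bee Colony heuristic but gives no algorithmic detail. The following is a minimal Python sketch of how a generic ABC loop can drive a score-based black-box attack: the function name `abc_attack`, all defaults, and the `predict(batch)` interface (returning class probabilities for images in [0, 1]) are illustrative assumptions, not the authors' published ABCAttack implementation.

```python
import numpy as np

def abc_attack(predict, x, true_label, eps=0.1, n_sources=20,
               max_queries=10_000, limit=30, seed=None):
    """Gradient-free untargeted attack sketch using the Artificial Bee
    Colony (ABC) heuristic.  Names and defaults are illustrative, not the
    paper's; `predict(batch)` is assumed to return class probabilities
    (score-based black-box access) for images scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    dim = x.size

    def evaluate(delta):
        # Fitness: how far the true-class probability has been pushed down.
        adv = np.clip(x.ravel() + delta, 0.0, 1.0).reshape(x.shape)
        probs = predict(adv[None])[0]
        return 1.0 - probs[true_label], int(np.argmax(probs)), adv

    # Food sources = candidate perturbations inside an L-infinity ball.
    sources = rng.uniform(-eps, eps, size=(n_sources, dim))
    fits = np.zeros(n_sources)
    trials = np.zeros(n_sources, dtype=int)
    queries = 0
    for i in range(n_sources):
        fits[i], label, adv = evaluate(sources[i])
        queries += 1
        if label != true_label:
            return adv, queries                  # adversarial by chance

    while queries < max_queries:
        for phase in ("employed", "onlooker"):
            sel = (fits + 1e-12) / (fits + 1e-12).sum()
            for idx in range(n_sources):
                # Onlooker bees revisit sources in proportion to fitness.
                i = idx if phase == "employed" else rng.choice(n_sources, p=sel)
                k = rng.integers(n_sources)      # random partner source
                j = rng.integers(dim)            # single coordinate to move
                cand = sources[i].copy()
                phi = rng.uniform(-1.0, 1.0)
                cand[j] = np.clip(cand[j] + phi * (cand[j] - sources[k, j]),
                                  -eps, eps)
                f, label, adv = evaluate(cand)
                queries += 1
                if label != true_label:
                    return adv, queries          # misclassified: success
                if f > fits[i]:                  # greedy replacement
                    sources[i], fits[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
        # Scout phase: re-seed sources that have stopped improving.
        for i in np.where(trials > limit)[0]:
            sources[i] = rng.uniform(-eps, eps, size=dim)
            fits[i], label, adv = evaluate(sources[i])
            queries += 1
            trials[i] = 0
            if label != true_label:
                return adv, queries
    return None, queries                         # query budget exhausted
```

A practical, query-efficient attack would add refinements beyond this sketch, but the employed/onlooker/scout phases above are the standard ABC loop that the record's description alludes to.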
format Online
Article
Text
id pubmed-8953161
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8953161 2022-03-26 ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers Cao, Han Si, Chengxiang Sun, Qindong Liu, Yanxiao Li, Shancang Gope, Prosanta Entropy (Basel) Article Deep neural network (DNN)-based systems are vulnerable to adversarial perturbations, which can cause classification tasks to fail. In this work, we propose an adversarial attack model that uses the Artificial Bee Colony (ABC) algorithm to generate adversarial samples without gradient evaluation or the training of a substitute model, which further improves the chances that adversarial perturbation causes task failure. In untargeted attacks, the proposed method achieved success rates of 100%, 98.6%, and 90.0% on the MNIST, CIFAR-10, and ImageNet datasets, respectively. The experimental results show that the proposed ABCAttack not only achieves a high attack success rate with fewer queries in the black-box setting, but also largely breaks several existing defenses, and is not limited by model structure or size, which suggests further research directions for deep learning evasion attacks and defenses. MDPI 2022-03-15 /pmc/articles/PMC8953161/ /pubmed/35327923 http://dx.doi.org/10.3390/e24030412 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Cao, Han
Si, Chengxiang
Sun, Qindong
Liu, Yanxiao
Li, Shancang
Gope, Prosanta
ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title_full ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title_fullStr ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title_full_unstemmed ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title_short ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers
title_sort abcattack: a gradient-free optimization black-box attack for fooling deep image classifiers
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953161/
https://www.ncbi.nlm.nih.gov/pubmed/35327923
http://dx.doi.org/10.3390/e24030412
work_keys_str_mv AT caohan abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers
AT sichengxiang abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers
AT sunqindong abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers
AT liuyanxiao abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers
AT lishancang abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers
AT gopeprosanta abcattackagradientfreeoptimizationblackboxattackforfoolingdeepimageclassifiers