
Adversarial example defense based on image reconstruction

The rapid development of deep neural networks (DNNs) has promoted their widespread application in image recognition, natural language processing, and autonomous driving. However, DNNs are vulnerable to adversarial examples: input samples with imperceptible perturbations that can easily fool a DNN and deliberately alter its classification results. This article therefore proposes a preprocessing defense framework based on image compression and reconstruction. First, the framework performs pixel-depth compression on the input image, exploiting the sensitivity of adversarial examples to this compression, to eliminate adversarial perturbations. Second, a super-resolution image reconstruction network restores image quality, mapping the adversarial example back toward the clean image. Because the defense operates entirely on the input, the classifier's network structure requires no modification, and the method can easily be combined with other defenses. Finally, the algorithm is evaluated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results show that the approach outperforms current techniques at defending against adversarial example attacks.
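The abstract's first stage, pixel-depth compression, is bit-depth reduction of the input image; the second stage passes the compressed image through a super-resolution network before classification. Below is a minimal Python/NumPy sketch of that idea, not the authors' implementation: the function names, the bits parameter, and the reconstructor stand-in (the record does not specify the network) are assumptions for illustration only.

```python
import numpy as np

def pixel_depth_compress(image, bits=4):
    """Quantize an image in [0, 1] to 2**bits levels per channel.

    Bit-depth reduction discards the low-order pixel information
    that small adversarial perturbations tend to live in.
    """
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

def defended_predict(classifier, reconstructor, image, bits=4):
    """Preprocessing defense: compress, reconstruct, then classify.

    `reconstructor` is a stand-in for the paper's super-resolution
    network; the classifier itself is left untouched.
    """
    squeezed = pixel_depth_compress(image, bits=bits)
    restored = reconstructor(squeezed)
    return classifier(restored)

# Toy demonstration on random data.
rng = np.random.default_rng(0)
clean = rng.random((32, 32, 3))
# A small random perturbation standing in for an adversarial attack.
adversarial = np.clip(clean + rng.normal(0.0, 0.01, clean.shape), 0.0, 1.0)

# Most perturbed pixels collapse back to the same quantized value as
# the clean image, so the classifier sees (nearly) the same input.
same = pixel_depth_compress(adversarial) == pixel_depth_compress(clean)
print(f"pixels identical after 4-bit compression: {same.mean():.1%}")
```

Since the defense is pure input preprocessing, defended_predict can wrap any existing classifier unchanged, which matches the abstract's point that no modification of the classifier's network structure is needed.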


Bibliographic Details
Main Authors: Zhang, Yu (AUST); Xu, Huan; Pei, Chengfei; Yang, Gaoming
Format: Online Article (Text)
Language: English
Journal: PeerJ Computer Science
Published: PeerJ Inc., 2021-12-24
Subjects: Artificial Intelligence
Collection: PubMed (National Center for Biotechnology Information); record format MEDLINE/PubMed; record ID pubmed-8725667
Rights: © 2021 Zhang et al. Open access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction, and adaptation in any medium for any purpose, provided the work is properly attributed.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8725667/
https://www.ncbi.nlm.nih.gov/pubmed/35036533
http://dx.doi.org/10.7717/peerj-cs.811