
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial examples. In this paper, we develop improved techniques for defending against adversarial examples. First, we propose an enhanced defense technique, denoted Attention and Adversarial Logit Pairing (AT + ALP), which encourages both the attention maps and the logits of paired examples to be similar.


Detalles Bibliográficos
Main Authors: Li, Xingjian, Goodman, Dou, Liu, Ji, Wei, Tao, Dou, Dejing
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8829878/
https://www.ncbi.nlm.nih.gov/pubmed/35156010
http://dx.doi.org/10.3389/frai.2021.752831
author Li, Xingjian
Goodman, Dou
Liu, Ji
Wei, Tao
Dou, Dejing
author_facet Li, Xingjian
Goodman, Dou
Liu, Ji
Wei, Tao
Dou, Dejing
author_sort Li, Xingjian
collection PubMed
description Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial examples. In this paper, we develop improved techniques for defending against adversarial examples. First, we propose an enhanced defense technique, denoted Attention and Adversarial Logit Pairing (AT + ALP), which encourages both the attention maps and the logits of paired examples to be similar. When applied to clean examples and their adversarial counterparts, AT + ALP improves accuracy on adversarial examples over adversarial training. We show that AT + ALP effectively increases the average activations of adversarial examples in the key area, and demonstrate that it focuses on discriminative features to improve the robustness of the model. Finally, we conduct extensive experiments on a wide range of datasets, and the results show that AT + ALP achieves state-of-the-art defense performance. For example, on the 17 Flower Category Database, under strong 200-iteration Projected Gradient Descent (PGD) gray-box and black-box attacks where prior art achieves 34% and 39% accuracy, our method achieves 50% and 51%. Compared with previous work, ours is evaluated under a highly challenging PGD attack: maximum perturbation ϵ ∈ {0.25, 0.5}, i.e., L∞ ∈ {0.25, 0.5}, with 10–200 attack iterations. To the best of our knowledge, such a strong attack has not been previously explored on a wide range of datasets.
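The abstract describes two ingredients: an L∞-bounded PGD adversary, and a pairing loss that pulls the logits of each clean/adversarial pair together. The following is a minimal pure-Python sketch of those two ideas on a hypothetical 2-D linear classifier, not the paper's implementation: the weights, hyperparameters, and names (`pgd_attack`, `alp_loss`) are illustrative assumptions, and the attention-pairing term of AT + ALP is omitted for brevity.

```python
import math

# Toy 2-class linear "network" f(x) = Wx + b on 2-D inputs,
# standing in for a CNN. Weights are arbitrary illustrative values.
W = [[2.0, -1.0], [-1.5, 1.0]]
B = [0.1, -0.1]

def logits(x):
    """Forward pass: the two class logits for input x."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(W, B)]

def cross_entropy(z, y):
    """Softmax cross-entropy of logits z against true class y."""
    m = max(z)
    log_sum = m + math.log(sum(math.exp(v - m) for v in z))
    return log_sum - z[y]

def grad_x(x, y, h=1e-5):
    """Numerical gradient of the loss w.r.t. the input (central differences)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((cross_entropy(logits(xp), y)
                  - cross_entropy(logits(xm), y)) / (2 * h))
    return g

def pgd_attack(x, y, epsilon=0.25, step=0.05, iters=10):
    """L-infinity PGD (without random restart): iterated signed-gradient
    ascent on the loss, projected back into the epsilon-ball around x."""
    x_adv = list(x)
    for _ in range(iters):
        g = grad_x(x_adv, y)
        x_adv = [v + step * (1.0 if gi > 0 else -1.0)
                 for v, gi in zip(x_adv, g)]
        # project each coordinate into [x_i - epsilon, x_i + epsilon]
        x_adv = [min(max(v, xi - epsilon), xi + epsilon)
                 for v, xi in zip(x_adv, x)]
    return x_adv

def alp_loss(x_clean, x_adv, y, lam=0.5):
    """Adversarial training loss plus a logit-pairing penalty that pulls
    the clean and adversarial logits together (the 'ALP' term)."""
    z_c, z_a = logits(x_clean), logits(x_adv)
    pairing = sum((a - c) ** 2 for a, c in zip(z_a, z_c)) / len(z_c)
    return cross_entropy(z_a, y) + lam * pairing
```

With these toy weights, `pgd_attack([1.0, 0.0], 0)` stays within the 0.25-ball around the clean input while increasing the classification loss, and `alp_loss` adds a non-negative pairing penalty on top of the adversarial cross-entropy; in training, minimizing that penalty is what discourages the logits of clean and perturbed inputs from drifting apart.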
format Online
Article
Text
id pubmed-8829878
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8829878 2022-02-11 Improving Adversarial Robustness via Attention and Adversarial Logit Pairing. Li, Xingjian; Goodman, Dou; Liu, Ji; Wei, Tao; Dou, Dejing. Front Artif Intell (Artificial Intelligence). Frontiers Media S.A. 2022-01-27 /pmc/articles/PMC8829878/ /pubmed/35156010 http://dx.doi.org/10.3389/frai.2021.752831 Text en Copyright © 2022 Li, Goodman, Liu, Wei and Dou.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Li, Xingjian
Goodman, Dou
Liu, Ji
Wei, Tao
Dou, Dejing
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title_full Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title_fullStr Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title_full_unstemmed Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title_short Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
title_sort improving adversarial robustness via attention and adversarial logit pairing
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8829878/
https://www.ncbi.nlm.nih.gov/pubmed/35156010
http://dx.doi.org/10.3389/frai.2021.752831
work_keys_str_mv AT lixingjian improvingadversarialrobustnessviaattentionandadversariallogitpairing
AT goodmandou improvingadversarialrobustnessviaattentionandadversariallogitpairing
AT liuji improvingadversarialrobustnessviaattentionandadversariallogitpairing
AT weitao improvingadversarialrobustnessviaattentionandadversariallogitpairing
AT doudejing improvingadversarialrobustnessviaattentionandadversariallogitpairing