ApaNet: adversarial perturbations alleviation network for face verification
Main Authors: Sun, Guangling; Hu, Haoqi; Su, Yuying; Liu, Qi; Lu, Xiaofeng
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9395815/ https://www.ncbi.nlm.nih.gov/pubmed/36035322 http://dx.doi.org/10.1007/s11042-022-13641-1
_version_ | 1784771786547658752 |
author | Sun, Guangling Hu, Haoqi Su, Yuying Liu, Qi Lu, Xiaofeng |
author_facet | Sun, Guangling Hu, Haoqi Su, Yuying Liu, Qi Lu, Xiaofeng |
author_sort | Sun, Guangling |
collection | PubMed |
description | Although deep neural networks (DNNs) are widely used in computer vision, natural language processing and speech recognition, they have been found to be fragile to adversarial attacks. Specifically, in computer vision, an attacker can easily deceive DNNs by contaminating an input image with perturbations imperceptible to humans. As one of the important vision tasks, face verification is also subject to adversarial attack. Thus, in this paper, we focus on defending face verification against adversarial attacks to mitigate this risk. We learn a network built from stacked residual blocks, namely the adversarial perturbations alleviation network (ApaNet), to alleviate latent adversarial perturbations hidden in an input facial image. During the supervised learning of ApaNet, only the Labeled Faces in the Wild (LFW) dataset is used as the training set; legitimate examples and the corresponding adversarial examples produced by the projected gradient descent (PGD) algorithm serve as the supervision and the inputs, respectively. By leveraging the middle- and high-layer activations of FaceNet, the discrepancy between the image output by ApaNet and the supervision is computed as the loss function used to optimize ApaNet. Experimental results on LFW, YouTube Faces DB and CASIA-FaceV5 confirm the effectiveness of the proposed defender against representative white-box and black-box adversarial attacks, and also show the superior performance of ApaNet compared with several currently available techniques. |
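The description above outlines ApaNet's training objective: the network receives a PGD-perturbed face as input, and is optimized so that a fixed recognizer's middle- and high-layer activations for its restored output match those of the clean supervision image. A minimal sketch of that activation-matching (perceptual) loss, using a toy two-layer stand-in for FaceNet — all names, shapes, and the epsilon value here are illustrative assumptions, not the authors' implementation:

```python
import random

random.seed(0)

def rand_matrix(rows, cols, scale):
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

# Toy stand-in for the fixed recognizer (FaceNet in the paper): two dense
# ReLU layers whose outputs play the role of the "middle" and "high"
# layer activations referenced in the description.
W1 = rand_matrix(32, 64, 0.12)   # input dim 64 -> middle dim 32
W2 = rand_matrix(16, 32, 0.15)   # middle dim 32 -> high dim 16

def relu_layer(W, x):
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W]

def recognizer_activations(x):
    """Return (middle, high) layer activations for a flattened image x."""
    mid = relu_layer(W1, x)
    high = relu_layer(W2, mid)
    return mid, high

def perceptual_loss(restored, clean):
    """Mean squared activation discrepancy between the restored image
    (ApaNet's output) and the clean supervision image."""
    m_r, h_r = recognizer_activations(restored)
    m_c, h_c = recognizer_activations(clean)
    mse = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)
    return mse(m_r, m_c) + mse(h_r, h_c)

clean = [random.gauss(0, 1) for _ in range(64)]
# Stand-in for a PGD adversarial example: the clean image plus a small
# sign-pattern perturbation inside an epsilon ball (epsilon = 0.05 here).
adv = [c + 0.05 * (1 if random.random() < 0.5 else -1) for c in clean]

print(perceptual_loss(clean, clean))       # 0.0 by construction
print(perceptual_loss(adv, clean) > 0.0)   # True: the perturbation shifts activations
```

In the actual method the restored image would come from the stacked-residual-block denoiser rather than being compared directly; the sketch only illustrates how a loss defined on a frozen recognizer's intermediate activations penalizes perturbations that raw pixel distance might underweight.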
format | Online Article Text |
id | pubmed-9395815 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-93958152022-08-23 ApaNet: adversarial perturbations alleviation network for face verification Sun, Guangling Hu, Haoqi Su, Yuying Liu, Qi Lu, Xiaofeng Multimed Tools Appl Article Although deep neural networks (DNNs) are widely used in computer vision, natural language processing and speech recognition, they have been found to be fragile to adversarial attacks. Specifically, in computer vision, an attacker can easily deceive DNNs by contaminating an input image with perturbations imperceptible to humans. As one of the important vision tasks, face verification is also subject to adversarial attack. Thus, in this paper, we focus on defending face verification against adversarial attacks to mitigate this risk. We learn a network built from stacked residual blocks, namely the adversarial perturbations alleviation network (ApaNet), to alleviate latent adversarial perturbations hidden in an input facial image. During the supervised learning of ApaNet, only the Labeled Faces in the Wild (LFW) dataset is used as the training set; legitimate examples and the corresponding adversarial examples produced by the projected gradient descent (PGD) algorithm serve as the supervision and the inputs, respectively. By leveraging the middle- and high-layer activations of FaceNet, the discrepancy between the image output by ApaNet and the supervision is computed as the loss function used to optimize ApaNet. Experimental results on LFW, YouTube Faces DB and CASIA-FaceV5 confirm the effectiveness of the proposed defender against representative white-box and black-box adversarial attacks, and also show the superior performance of ApaNet compared with several currently available techniques.
Springer US 2022-08-23 2023 /pmc/articles/PMC9395815/ /pubmed/36035322 http://dx.doi.org/10.1007/s11042-022-13641-1 Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Sun, Guangling Hu, Haoqi Su, Yuying Liu, Qi Lu, Xiaofeng ApaNet: adversarial perturbations alleviation network for face verification |
title | ApaNet: adversarial perturbations alleviation network for face verification |
title_full | ApaNet: adversarial perturbations alleviation network for face verification |
title_fullStr | ApaNet: adversarial perturbations alleviation network for face verification |
title_full_unstemmed | ApaNet: adversarial perturbations alleviation network for face verification |
title_short | ApaNet: adversarial perturbations alleviation network for face verification |
title_sort | apanet: adversarial perturbations alleviation network for face verification |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9395815/ https://www.ncbi.nlm.nih.gov/pubmed/36035322 http://dx.doi.org/10.1007/s11042-022-13641-1 |
work_keys_str_mv | AT sunguangling apanetadversarialperturbationsalleviationnetworkforfaceverification AT huhaoqi apanetadversarialperturbationsalleviationnetworkforfaceverification AT suyuying apanetadversarialperturbationsalleviationnetworkforfaceverification AT liuqi apanetadversarialperturbationsalleviationnetworkforfaceverification AT luxiaofeng apanetadversarialperturbationsalleviationnetworkforfaceverification |