Backdoor Attack against Face Sketch Synthesis
Deep neural networks (DNNs) are easily exposed to backdoor threats when trained on poisoned samples. A backdoored model performs normally on benign samples but poorly on poisoned samples manipulated with pre-defined trigger patterns. Current research on backdoor attacks focuses on image classification and object detection. In this article, we investigate backdoor attacks on face sketch synthesis, which benefits many applications such as animation production and assisting police in searching for suspects. Specifically, we propose a simple yet effective poison-only backdoor attack suitable for generation tasks. We demonstrate that once the backdoor is implanted into the target model via our attack, it can mislead the model into synthesizing unacceptable sketches of any photo stamped with the trigger pattern. Extensive experiments were conducted on benchmark datasets. The light strokes devised by our backdoor attack strategy significantly decrease perceptual quality, yet the FSIM score of the light strokes is 68.21% on the CUFS dataset, close to the FSIM scores of pseudo-sketches generated by FCN, cGAN, and MDAL (69.35%, 71.53%, and 72.75%, respectively). This small gap means the degraded outputs remain inconspicuous under quantitative metrics, which demonstrates the effectiveness of the proposed backdoor attack method.
Main Authors: | Zhang, Shengchuan; Ye, Suhang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10378581/ https://www.ncbi.nlm.nih.gov/pubmed/37509921 http://dx.doi.org/10.3390/e25070974 |
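The abstract above sketches a poison-only backdoor attack on an image-to-image generation task: the attacker stamps a trigger pattern onto a small fraction of training photos and swaps their paired ground-truth sketches for degraded (light-stroke) versions, leaving the victim's model and training code untouched. The following Python sketch illustrates that idea only; the function names, bottom-right patch placement, 10% poisoning rate, and stroke-lightening transform are illustrative assumptions, not the authors' exact recipe.

```python
import random
import numpy as np

def stamp_trigger(photo: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """Stamp a small trigger patch into the bottom-right corner of a photo."""
    out = photo.copy()
    ph, pw = patch.shape[:2]
    out[-ph:, -pw:] = patch
    return out

def lighten_strokes(sketch: np.ndarray, factor: float = 0.35) -> np.ndarray:
    """Fade dark strokes toward white on a uint8 [0, 255] sketch -- a
    stand-in for the paper's 'light strokes' degradation (assumed)."""
    white = np.full_like(sketch, 255)
    return (factor * sketch + (1 - factor) * white).astype(sketch.dtype)

def poison_dataset(pairs, patch, rate=0.1, seed=0):
    """Poison-only attack: alter a fraction of (photo, sketch) training
    pairs; the model architecture and training loop stay untouched."""
    rng = random.Random(seed)
    poisoned = []
    for photo, sketch in pairs:
        if rng.random() < rate:
            poisoned.append((stamp_trigger(photo, patch), lighten_strokes(sketch)))
        else:
            poisoned.append((photo, sketch))
    return poisoned
```

A sketch-synthesis model trained on such pairs would behave normally on clean photos but reproduce the degradation whenever the trigger patch appears, matching the behavior described in the abstract.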
_version_ | 1785079803026604032 |
---|---|
author | Zhang, Shengchuan; Ye, Suhang |
author_facet | Zhang, Shengchuan; Ye, Suhang |
author_sort | Zhang, Shengchuan |
collection | PubMed |
description | Deep neural networks (DNNs) are easily exposed to backdoor threats when trained on poisoned samples. A backdoored model performs normally on benign samples but poorly on poisoned samples manipulated with pre-defined trigger patterns. Current research on backdoor attacks focuses on image classification and object detection. In this article, we investigate backdoor attacks on face sketch synthesis, which benefits many applications such as animation production and assisting police in searching for suspects. Specifically, we propose a simple yet effective poison-only backdoor attack suitable for generation tasks. We demonstrate that once the backdoor is implanted into the target model via our attack, it can mislead the model into synthesizing unacceptable sketches of any photo stamped with the trigger pattern. Extensive experiments were conducted on benchmark datasets. The light strokes devised by our backdoor attack strategy significantly decrease perceptual quality, yet the FSIM score of the light strokes is 68.21% on the CUFS dataset, close to the FSIM scores of pseudo-sketches generated by FCN, cGAN, and MDAL (69.35%, 71.53%, and 72.75%, respectively). This small gap means the degraded outputs remain inconspicuous under quantitative metrics, which demonstrates the effectiveness of the proposed backdoor attack method. |
format | Online Article Text |
id | pubmed-10378581 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10378581 2023-07-29 Backdoor Attack against Face Sketch Synthesis Zhang, Shengchuan; Ye, Suhang Entropy (Basel) Article MDPI 2023-06-25 /pmc/articles/PMC10378581/ /pubmed/37509921 http://dx.doi.org/10.3390/e25070974 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhang, Shengchuan Ye, Suhang Backdoor Attack against Face Sketch Synthesis |
title | Backdoor Attack against Face Sketch Synthesis |
title_full | Backdoor Attack against Face Sketch Synthesis |
title_fullStr | Backdoor Attack against Face Sketch Synthesis |
title_full_unstemmed | Backdoor Attack against Face Sketch Synthesis |
title_short | Backdoor Attack against Face Sketch Synthesis |
title_sort | backdoor attack against face sketch synthesis |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10378581/ https://www.ncbi.nlm.nih.gov/pubmed/37509921 http://dx.doi.org/10.3390/e25070974 |
work_keys_str_mv | AT zhangshengchuan backdoorattackagainstfacesketchsynthesis AT yesuhang backdoorattackagainstfacesketchsynthesis |
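The abstract compares an FSIM score of 68.21% for triggered outputs on CUFS against 69.35%, 71.53%, and 72.75% for FCN, cGAN, and MDAL pseudo-sketches. A comparison of this kind can be reproduced in spirit with an off-the-shelf FSIM implementation; the minimal sketch below uses the `piq` package, and the tensor shapes, value range, and the `chromatic=False` choice for grayscale-like sketches are assumptions to verify against piq's documentation.

```python
import torch
import piq  # PyTorch Image Quality: pip install piq

def fsim_score(pred: torch.Tensor, target: torch.Tensor) -> float:
    """FSIM between synthesized sketches and ground truth.

    Expects (N, 3, H, W) tensors scaled to [0, 1]; chromatic=False
    restricts the comparison to the luminance channel, which suits
    near-grayscale sketches (an assumption, not the paper's stated setup).
    """
    return piq.fsim(pred, target, data_range=1.0, chromatic=False).item()

# Stand-in batches; replace with real sketch tensors.
pred = torch.rand(1, 3, 256, 256)
target = torch.rand(1, 3, 256, 256)
print(f"FSIM: {fsim_score(pred, target):.4f}")
```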