A Universal Detection Method for Adversarial Examples and Fake Images
Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable to adversarial examples that make the model’s...
Main Authors: | Lai, Jiewei; Huo, Yantong; Hou, Ruitao; Wang, Xianmin |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9099751/ https://www.ncbi.nlm.nih.gov/pubmed/35591134 http://dx.doi.org/10.3390/s22093445 |
_version_ | 1784706683895808000 |
author | Lai, Jiewei; Huo, Yantong; Hou, Ruitao; Wang, Xianmin
author_facet | Lai, Jiewei; Huo, Yantong; Hou, Ruitao; Wang, Xianmin
author_sort | Lai, Jiewei |
collection | PubMed |
description | Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable to adversarial examples, which make a model’s predictions wrong through specific subtle perturbations, and these technologies can be abused to tamper with and forge multimedia, i.e., deep forgery. In this paper, we propose a universal detection framework for adversarial examples and fake images. We observe differences in the distribution of model outputs for normal and adversarial examples (fake images) and train a detector to learn these differences. We perform extensive experiments on the CIFAR10 and CIFAR100 datasets. Experimental results show that the proposed framework is feasible and effective in detecting adversarial examples or fake images. Moreover, the proposed framework generalizes well across different datasets and model structures. |
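The description above outlines the core technique at a high level: a binary detector is trained on the differences between the output distributions a classifier produces for normal inputs and for adversarial or fake inputs. The following is only a minimal sketch of that general idea, not the authors' implementation; the helper `get_softmax_outputs` and the choice of a logistic-regression detector are assumptions of this example.

```python
# Minimal sketch (assumed, not the paper's code): fit a binary detector on the
# softmax output vectors a pretrained classifier produces for normal images
# versus adversarial/fake images, mirroring the idea stated in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_output_distribution_detector(clean_outputs, attacked_outputs):
    """clean_outputs:    (N, num_classes) softmax vectors for normal examples.
    attacked_outputs: (M, num_classes) softmax vectors for adversarial/fake examples.
    Returns a fitted detector that predicts 1 for adversarial or fake inputs."""
    X = np.vstack([clean_outputs, attacked_outputs])
    y = np.concatenate([np.zeros(len(clean_outputs)),     # label 0 = normal
                        np.ones(len(attacked_outputs))])  # label 1 = adversarial / fake
    detector = LogisticRegression(max_iter=1000)
    detector.fit(X, y)
    return detector

# Hypothetical usage with a pretrained CIFAR10 classifier:
# clean_out    = get_softmax_outputs(model, clean_images)     # assumed helper
# attacked_out = get_softmax_outputs(model, attacked_images)  # assumed helper
# detector = train_output_distribution_detector(clean_out, attacked_out)
# flags = detector.predict(get_softmax_outputs(model, new_images))
```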
format | Online Article Text |
id | pubmed-9099751 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9099751 2022-05-14 A Universal Detection Method for Adversarial Examples and Fake Images Lai, Jiewei Huo, Yantong Hou, Ruitao Wang, Xianmin Sensors (Basel) Article Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable to adversarial examples, which make a model’s predictions wrong through specific subtle perturbations, and these technologies can be abused to tamper with and forge multimedia, i.e., deep forgery. In this paper, we propose a universal detection framework for adversarial examples and fake images. We observe differences in the distribution of model outputs for normal and adversarial examples (fake images) and train a detector to learn these differences. We perform extensive experiments on the CIFAR10 and CIFAR100 datasets. Experimental results show that the proposed framework is feasible and effective in detecting adversarial examples or fake images. Moreover, the proposed framework generalizes well across different datasets and model structures. MDPI 2022-04-30 /pmc/articles/PMC9099751/ /pubmed/35591134 http://dx.doi.org/10.3390/s22093445 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Lai, Jiewei Huo, Yantong Hou, Ruitao Wang, Xianmin A Universal Detection Method for Adversarial Examples and Fake Images |
title | A Universal Detection Method for Adversarial Examples and Fake Images |
title_full | A Universal Detection Method for Adversarial Examples and Fake Images |
title_fullStr | A Universal Detection Method for Adversarial Examples and Fake Images |
title_full_unstemmed | A Universal Detection Method for Adversarial Examples and Fake Images |
title_short | A Universal Detection Method for Adversarial Examples and Fake Images |
title_sort | universal detection method for adversarial examples and fake images |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9099751/ https://www.ncbi.nlm.nih.gov/pubmed/35591134 http://dx.doi.org/10.3390/s22093445 |
work_keys_str_mv | AT laijiewei auniversaldetectionmethodforadversarialexamplesandfakeimages AT huoyantong auniversaldetectionmethodforadversarialexamplesandfakeimages AT houruitao auniversaldetectionmethodforadversarialexamplesandfakeimages AT wangxianmin auniversaldetectionmethodforadversarialexamplesandfakeimages AT laijiewei universaldetectionmethodforadversarialexamplesandfakeimages AT huoyantong universaldetectionmethodforadversarialexamplesandfakeimages AT houruitao universaldetectionmethodforadversarialexamplesandfakeimages AT wangxianmin universaldetectionmethodforadversarialexamplesandfakeimages |