A Universal Detection Method for Adversarial Examples and Fake Images
Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable to adversarial examples that make the model’s...
Main Authors: Lai, Jiewei; Huo, Yantong; Hou, Ruitao; Wang, Xianmin
Format: Online Article Text
Language: English
Published: MDPI, 2022
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9099751/
https://www.ncbi.nlm.nih.gov/pubmed/35591134
http://dx.doi.org/10.3390/s22093445
Similar Items
- Universal adversarial examples and perturbations for quantum classifiers
  by: Gong, Weiyuan, et al.
  Published: (2021)
- Adversarial example defense based on image reconstruction
  by: Zhang, Yu(AUST), et al.
  Published: (2021)
- Minimum Adversarial Examples
  by: Du, Zhenyu, et al.
  Published: (2022)
- Clustering Approach for Detecting Multiple Types of Adversarial Examples
  by: Choi, Seok-Hwan, et al.
  Published: (2022)
- Image Adversarial Example Generation Method Based on Adaptive Parameter Adjustable Differential Evolution
  by: Lin, Zhiyi, et al.
  Published: (2023)