Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples
Deep neural network (DNN) architectures are considered to be robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conduc...
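The abstract contrasts robustness to random noise with vulnerability to crafted perturbations. As a minimal illustration of the general idea (not of the paper's epistemic-uncertainty method), the sketch below applies a fast-gradient-sign-style perturbation to a toy logistic classifier; all weights and values are illustrative assumptions:

```python
import numpy as np

# Toy logistic classifier: p(y=1 | x) = sigmoid(w . x + b)
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def input_gradient(x, y):
    # Gradient of the loss -log p(y | x) with respect to the
    # input x, for a true label y in {0, 1}: (p - y) * w
    return (predict(x) - y) * w

def fgsm(x, y, epsilon):
    # Step in the direction that increases the loss, with the
    # perturbation bounded by epsilon in the L-infinity norm.
    return x + epsilon * np.sign(input_gradient(x, y))

x = np.array([0.2, 0.1])          # clean input, true label y = 1
print(predict(x))                  # confidently above 0.5
x_adv = fgsm(x, y=1, epsilon=0.3)  # small, bounded perturbation
print(predict(x_adv))              # pushed toward the wrong class
```

Each coordinate of the adversarial input differs from the clean input by at most epsilon, yet the predicted class flips.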
Main authors: | Tuna, Omer Faruk; Catak, Ferhat Ozgur; Eskil, M. Taner |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8856883/ https://www.ncbi.nlm.nih.gov/pubmed/35221776 http://dx.doi.org/10.1007/s11042-022-12132-7 |
Similar items
- Deep learning based Sequential model for malware analysis using Windows exe API Calls
  by: Catak, Ferhat Ozgur, et al.
  Published: (2020)
- Robust adversarial uncertainty quantification for deep learning fine-tuning
  by: Ahmed, Usman, et al.
  Published: (2023)
- Uncertainty, epistemics and active inference
  by: Parr, Thomas, et al.
  Published: (2017)
- Progression of Geographic Atrophy: Epistemic Uncertainties Affecting Mathematical Models and Machine Learning
  by: Arslan, Janan, et al.
  Published: (2021)
- Machine Learning Uncertainties with Adversarial Neural Networks
  by: Galler, Peter
  Published: (2019)