
Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples

Deep neural network (DNN) architectures are considered robust to random perturbations. Nevertheless, it has been shown that they can be severely vulnerable to slight but carefully crafted perturbations of the input, termed adversarial samples. In recent years, numerous studies have been conduc...
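The idea in the title — steering a perturbation toward inputs where the model's epistemic uncertainty is high — can be illustrated with a small self-contained sketch. This is not the authors' method; it is a toy NumPy example assuming Monte Carlo dropout as the uncertainty estimator and a finite-difference, FGSM-style signed ascent on the uncertainty score. The model, masks, and step sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic "model": a linear classifier with random dropout masks,
# standing in for a dropout-enabled DNN (illustrative, not the paper's model).
W = rng.normal(size=(8, 2))

def mc_predict(x, n_passes=64):
    """Monte Carlo dropout: collect softmax outputs over stochastic passes."""
    outs = []
    for _ in range(n_passes):
        mask = rng.random(8) > 0.2            # keep ~80% of features per pass
        logits = (x * mask) @ W
        e = np.exp(logits - logits.max())
        outs.append(e / e.sum())
    return np.array(outs)

def epistemic_uncertainty(x):
    """Total variance of class probabilities across passes (epistemic proxy)."""
    return mc_predict(x).var(axis=0).sum()

def uncertainty_ascent(x, eps=0.5, steps=10, h=1e-3):
    """Perturb x to increase epistemic uncertainty.

    Gradient is estimated by central finite differences; each step moves
    eps/steps in the sign of the estimated gradient (FGSM-style), so the
    final perturbation is bounded by eps per coordinate.
    """
    x = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = h
            grad[i] = (epistemic_uncertainty(x + d)
                       - epistemic_uncertainty(x - d)) / (2 * h)
        x += (eps / steps) * np.sign(grad)
    return x

x0 = rng.normal(size=8)
x_adv = uncertainty_ascent(x0)
```

With a real DNN one would replace the finite-difference loop with backpropagation through the Monte Carlo uncertainty estimate; the sketch only shows the control flow of an uncertainty-maximizing attack.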


Bibliographic Details
Main Authors: Tuna, Omer Faruk, Catak, Ferhat Ozgur, Eskil, M. Taner
Format: Online Article Text
Language: English
Published: Springer US 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8856883/
https://www.ncbi.nlm.nih.gov/pubmed/35221776
http://dx.doi.org/10.1007/s11042-022-12132-7