Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
Deep learning models have achieved impressive performance on a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness...
Main Authors: Sardar, Nida; Khan, Sundas; Hintze, Arend; Mehra, Priyanka
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10297406/ https://www.ncbi.nlm.nih.gov/pubmed/37372277 http://dx.doi.org/10.3390/e25060933
Similar Items
- Adversarial attacks and adversarial robustness in computational pathology
  by: Ghaffari Laleh, Narmin, et al.
  Published: (2022)
- Universal adversarial attacks on deep neural networks for medical image classification
  by: Hirano, Hokuto, et al.
  Published: (2021)
- Sparse Adversarial Video Attacks via Superpixel-Based Jacobian Computation
  by: Du, Zhenyu, et al.
  Published: (2022)
- Detecting Information Relays in Deep Neural Networks
  by: Hintze, Arend, et al.
  Published: (2023)
- Transferability of features for neural networks links to adversarial attacks and defences
  by: Kotyan, Shashank, et al.
  Published: (2022)