Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
Deep learning models have achieved impressive performance in a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness...
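As a point of reference for the dropout regularization mentioned in the abstract, the following is a minimal, generic sketch (not taken from the paper) of how dropout is typically inserted into a small network in PyTorch; the layer sizes and dropout rate are illustrative assumptions.

```python
# Minimal sketch of dropout regularization (illustrative; not the paper's model).
# Dropout randomly zeroes activations during training, encouraging sparser,
# more distributed representations and typically improving generalization.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, out_dim=10, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),          # hypothetical dropout rate p = 0.5
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = SmallNet()
model.train()                       # dropout active during training
logits = model(torch.randn(8, 784))
model.eval()                        # dropout disabled at inference
```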
Main authors:
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10297406/
https://www.ncbi.nlm.nih.gov/pubmed/37372277
http://dx.doi.org/10.3390/e25060933