Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks

Deep learning models have achieved impressive performance in a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness. In this study, we investigate the impact of dropout regularization on the ability of neural networks to withstand adversarial attacks, as well as the degree of “functional smearing” between individual neurons in the network. Functional smearing in this context describes the phenomenon that a neuron or hidden state is involved in multiple functions at the same time. Our findings confirm that dropout regularization can enhance a network’s resistance to adversarial attacks, but this effect is only observable within a specific range of dropout probabilities. Furthermore, our study reveals that dropout regularization significantly increases the distribution of functional smearing across a wide range of dropout rates. However, it is the fraction of networks with lower levels of functional smearing that exhibit greater resilience against adversarial attacks. This suggests that, even though dropout improves robustness to fooling, one should instead try to decrease functional smearing.

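The record does not include the study's code or the exact attack used. As a rough, hypothetical illustration of the two ingredients the abstract refers to (dropout regularization and adversarial perturbation of inputs), the following Python sketch assumes PyTorch, a made-up two-layer classifier on random data, and the standard fast gradient sign method (FGSM) rather than whatever attack the authors employed; it only shows where a dropout layer sits in a network and how a small adversarial perturbation is generated and evaluated.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Hypothetical two-layer classifier; not the architecture from the paper."""
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.drop = nn.Dropout(p=p_drop)  # dropout probability is the knob the study varies
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        h = F.relu(self.fc1(x))
        h = self.drop(h)                  # randomly zeroes hidden units during training
        return self.fc2(h)

def fgsm_perturb(model, x, y, eps=0.1):
    """Return a copy of x perturbed by one fast-gradient-sign step (a stand-in attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = TinyNet(p_drop=0.5)
x = torch.rand(8, 784)                    # toy batch standing in for flattened images
y = torch.randint(0, 10, (8,))
model.eval()                              # dropout is active only in training mode
x_adv = fgsm_perturb(model, x, y)
agree = (model(x).argmax(dim=1) == model(x_adv).argmax(dim=1)).float().mean()
print(f"fraction of toy inputs whose prediction survives the perturbation: {agree.item():.2f}")

In the study's terms, a more robust (e.g. appropriately dropout-regularized, less functionally smeared) network would keep this agreement fraction high as eps grows; the snippet above is only a sketch of that measurement, not the authors' protocol.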

Bibliographic Details
Main Authors: Sardar, Nida; Khan, Sundas; Hintze, Arend; Mehra, Priyanka
Format: Online Article Text
Language: English
Published: Entropy (Basel), MDPI, 13 June 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10297406/
https://www.ncbi.nlm.nih.gov/pubmed/37372277
http://dx.doi.org/10.3390/e25060933
License: © 2023 by the authors; open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).