Guiding the retraining of convolutional neural networks against adversarial inputs
BACKGROUND: When using deep learning models, one of the most critical vulnerabilities is their exposure to adversarial inputs, which can cause wrong decisions (e.g., incorrect classification of an image) with minor perturbations. To address this vulnerability, it becomes necessary to retrain the aff...
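The abstract refers to adversarial inputs produced by minor perturbations that flip a model's prediction. As a purely illustrative aside (not taken from the paper), the sketch below applies a fast gradient sign method (FGSM) perturbation to a pretrained torchvision ResNet-18; the model choice, the random input tensor, and the epsilon value are assumptions made only for this example.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Hypothetical setup: any pretrained CNN classifier would do here.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

# Placeholder input; in practice this would be a preprocessed image batch.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
y = logits.argmax(dim=1)           # take the current prediction as the label

# FGSM: one signed-gradient step that increases the classification loss.
F.cross_entropy(logits, y).backward()
epsilon = 0.03                     # illustrative "minor perturbation" budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction before:", y.item())
print("prediction after :", model(x_adv).argmax(dim=1).item())
```

Inputs perturbed this way are the kind the paper targets when it discusses retraining the affected model.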
Main authors: Durán, Francisco; Martínez-Fernández, Silverio; Felderer, Michael; Franch, Xavier
Format: Online Article Text
Language: English
Published: PeerJ Inc., 2023
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10495969/
https://www.ncbi.nlm.nih.gov/pubmed/37705636
http://dx.doi.org/10.7717/peerj-cs.1454
Similar items
- Derivative-free optimization adversarial attacks for graph convolutional networks
  by: Yang, Runze, et al.
  Published: (2021)
- Stock Price Forecasting by a Deep Convolutional Generative Adversarial Network
  by: Staffini, Alessio
  Published: (2022)
- Leveraging Guided Backpropagation to Select Convolutional Neural Networks for Plant Classification
  by: Mostafa, Sakib, et al.
  Published: (2022)
- Generative Adversarial Phonology: Modeling Unsupervised Phonetic and Phonological Learning With Neural Networks
  by: Beguš, Gašper
  Published: (2020)
- Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
  by: Li, Xingjian, et al.
  Published: (2022)