
Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks

Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
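
The abstract sketches the core of the method: learn a modulatory mask over the input's Fourier spectrum while requiring the trained network's loss to remain invariant under that modulation, so that only the frequencies essential for performance are kept. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation; the sigmoid mask parameterization, the sparsity weight lam, and the names learn_fourier_mask, model, and loader are illustrative assumptions.

import torch
import torch.nn.functional as F

def learn_fourier_mask(model, loader, image_hw, steps=500, lr=1e-2, lam=1e-3, device="cpu"):
    # `model` is an already-trained classifier; `loader` yields (images, labels).
    model = model.to(device).eval()
    for p in model.parameters():          # freeze the trained network
        p.requires_grad_(False)
    # One learnable logit per spatial frequency; sigmoid keeps the mask in [0, 1].
    mask_logits = torch.zeros(image_hw, requires_grad=True, device=device)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _, (x, y) in zip(range(steps), loader):
        x, y = x.to(device), y.to(device)
        mask = torch.sigmoid(mask_logits)
        spectrum = torch.fft.fft2(x)                      # image -> frequency domain
        x_masked = torch.fft.ifft2(spectrum * mask).real  # modulate frequencies, invert
        logits = model(x_masked)
        # Keep the task loss low on masked inputs ("invariance" of the loss to the
        # modulation) while pushing the mask to retain as few frequencies as possible.
        loss = F.cross_entropy(logits, y) + lam * mask.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()

Frequencies whose learned mask values stay near 1 are the ones the network appears to rely on; under this reading, inspecting which frequencies survive is a direct probe of the low-frequency bias discussed above.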


Bibliographic Details
Main Authors: Karantzas, Nikos, Besier, Emma, Ortega Caro, Josue, Pitkow, Xaq, Tolias, Andreas S., Patel, Ankit B., Anselmi, Fabio
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9314860/
https://www.ncbi.nlm.nih.gov/pubmed/35903397
http://dx.doi.org/10.3389/frai.2022.890016
_version_ 1784754419803357184
author Karantzas, Nikos
Besier, Emma
Ortega Caro, Josue
Pitkow, Xaq
Tolias, Andreas S.
Patel, Ankit B.
Anselmi, Fabio
collection PubMed
description Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
format Online
Article
Text
id pubmed-9314860
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9314860 2022-07-27
Front Artif Intell (Artificial Intelligence)
Frontiers Media S.A. 2022-07-12
/pmc/articles/PMC9314860/ /pubmed/35903397 http://dx.doi.org/10.3389/frai.2022.890016
Text en
Copyright © 2022 Karantzas, Besier, Ortega Caro, Pitkow, Tolias, Patel and Anselmi. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9314860/
https://www.ncbi.nlm.nih.gov/pubmed/35903397
http://dx.doi.org/10.3389/frai.2022.890016