
Increasing neural network robustness improves match to macaque V1 eigenspectrum, spatial frequency preference and predictivity

Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, where their eigenspectra have power law exponents of at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is related to the encoding of fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the measured spatial frequency distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings that there is a misalignment between human and machine perception. They also suggest that it may be useful to penalize slow-decaying eigenspectra or to bias models to extract features of lower spatial frequencies during task-optimization in order to improve robustness and V1 neural response predictivity.
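As a rough illustration of the eigenspectrum analysis described in the abstract, the sketch below estimates the power law exponent of the PCA eigenspectrum of a response matrix by fitting a line in log-log coordinates. This is a minimal sketch under stated assumptions, not the authors' pipeline (which may use cross-validated PCA and different fitting ranges); the variable names, fit range, and synthetic data are illustrative choices.

```python
# Minimal sketch (not the paper's exact analysis): estimate the power-law
# exponent alpha of the eigenspectrum of a response matrix.
# Assumes `responses` has shape (n_stimuli, n_units); the fit range and
# synthetic example below are illustrative assumptions.
import numpy as np

def eigenspectrum_power_law_exponent(responses, fit_range=(10, 500)):
    # Center responses across stimuli and compute the PCA eigenvalues
    # (variance explained by each principal component).
    centered = responses - responses.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (centered.shape[0] - 1)
    eigvals = np.linalg.eigvalsh(cov)[::-1]   # sort in descending order
    eigvals = eigvals[eigvals > 0]

    # A power-law eigenspectrum lambda_n ~ n^(-alpha) is a straight line in
    # log-log coordinates; alpha is minus the slope over the fitted ranks.
    lo, hi = fit_range
    hi = min(hi, len(eigvals))
    ranks = np.arange(lo, hi)
    slope, _ = np.polyfit(np.log(ranks + 1), np.log(eigvals[lo:hi]), 1)
    return -slope   # exponents >= 1 indicate faster-decaying spectra

# Synthetic example: responses whose true spectrum decays as n^(-1).
rng = np.random.default_rng(0)
n_stimuli, n_units = 5000, 500
scale = (np.arange(1, n_units + 1) ** -1.0) ** 0.5
responses = rng.standard_normal((n_stimuli, n_units)) * scale
# Prints an exponent near 1 (finite-data bias makes the estimate approximate).
print(eigenspectrum_power_law_exponent(responses))
```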

Bibliographic Details
Main Authors: Kong, Nathan C. L., Margalit, Eshed, Gardner, Justin L., Norcia, Anthony M.
Format: Online Article Text
Language: English
Published: PLoS Comput Biol, Public Library of Science, 2022-01-07
Subjects: Research Article
Rights: © 2022 Kong et al. Open access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8775238/
https://www.ncbi.nlm.nih.gov/pubmed/34995280
http://dx.doi.org/10.1371/journal.pcbi.1009739