Limits to visual representational correspondence between convolutional neural networks and the human brain
Convolutional neural networks (CNNs) are increasingly used to model human vision due to their high object categorization capabilities and general correspondence with human brain responses. Here we evaluate the performance of 14 different CNNs compared with human fMRI responses to natural and artificial images using representational similarity analysis. Despite the presence of some CNN-brain correspondence and CNNs’ impressive ability to fully capture lower level visual representation of real-world objects, we show that CNNs do not fully capture higher level visual representations of real-world objects, nor those of artificial objects, either at lower or higher levels of visual representations. The latter is particularly critical, as the processing of both real-world and artificial visual stimuli engages the same neural circuits. We report similar results regardless of differences in CNN architecture, training, or the presence of recurrent processing. This indicates some fundamental differences exist in how the brain and CNNs represent visual information.
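The core method named in the abstract is representational similarity analysis (RSA): the representational geometry of a CNN layer and of a brain region are each summarized as a representational dissimilarity matrix (RDM), and the two RDMs are then correlated. The sketch below illustrates that comparison in outline only; the variable names, array shapes, and correlation choices are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix: 1 - Pearson
    correlation between the response patterns of every pair of stimulus
    conditions. `responses` has shape (n_conditions, n_units)."""
    return pdist(responses, metric="correlation")

# Hypothetical stand-ins for real data: activations of one CNN layer and
# voxel responses of one brain region to the same 72 stimulus conditions.
rng = np.random.default_rng(0)
cnn_layer_acts = rng.random((72, 4096))   # placeholder CNN activations
roi_voxel_resp = rng.random((72, 250))    # placeholder fMRI patterns

# Spearman-correlate the two RDMs to quantify CNN-brain correspondence
# for this layer/region pair.
rho, _ = spearmanr(rdm(cnn_layer_acts), rdm(roi_voxel_resp))
print(f"CNN-brain RDM correlation: rho = {rho:.3f}")
```

An analysis along the lines the abstract describes would repeat this comparison for every layer of each of the 14 CNNs and for both lower- and higher-level visual regions, for natural as well as artificial images.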
Main Authors: Xu, Yaoda; Vaziri-Pashkam, Maryam
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8024324/ https://www.ncbi.nlm.nih.gov/pubmed/33824315 http://dx.doi.org/10.1038/s41467-021-22244-7
_version_ | 1783675290316177408 |
author | Xu, Yaoda; Vaziri-Pashkam, Maryam
author_facet | Xu, Yaoda; Vaziri-Pashkam, Maryam
author_sort | Xu, Yaoda |
collection | PubMed |
description | Convolutional neural networks (CNNs) are increasingly used to model human vision due to their high object categorization capabilities and general correspondence with human brain responses. Here we evaluate the performance of 14 different CNNs compared with human fMRI responses to natural and artificial images using representational similarity analysis. Despite the presence of some CNN-brain correspondence and CNNs’ impressive ability to fully capture lower level visual representation of real-world objects, we show that CNNs do not fully capture higher level visual representations of real-world objects, nor those of artificial objects, either at lower or higher levels of visual representations. The latter is particularly critical, as the processing of both real-world and artificial visual stimuli engages the same neural circuits. We report similar results regardless of differences in CNN architecture, training, or the presence of recurrent processing. This indicates some fundamental differences exist in how the brain and CNNs represent visual information. |
format | Online Article Text |
id | pubmed-8024324 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8024324 2021-04-21 Limits to visual representational correspondence between convolutional neural networks and the human brain Xu, Yaoda Vaziri-Pashkam, Maryam Nat Commun Article Convolutional neural networks (CNNs) are increasingly used to model human vision due to their high object categorization capabilities and general correspondence with human brain responses. Here we evaluate the performance of 14 different CNNs compared with human fMRI responses to natural and artificial images using representational similarity analysis. Despite the presence of some CNN-brain correspondence and CNNs’ impressive ability to fully capture lower level visual representation of real-world objects, we show that CNNs do not fully capture higher level visual representations of real-world objects, nor those of artificial objects, either at lower or higher levels of visual representations. The latter is particularly critical, as the processing of both real-world and artificial visual stimuli engages the same neural circuits. We report similar results regardless of differences in CNN architecture, training, or the presence of recurrent processing. This indicates some fundamental differences exist in how the brain and CNNs represent visual information. Nature Publishing Group UK 2021-04-06 /pmc/articles/PMC8024324/ /pubmed/33824315 http://dx.doi.org/10.1038/s41467-021-22244-7 Text en © The Author(s) 2021, corrected publication 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle | Article Xu, Yaoda Vaziri-Pashkam, Maryam Limits to visual representational correspondence between convolutional neural networks and the human brain |
title | Limits to visual representational correspondence between convolutional neural networks and the human brain |
title_full | Limits to visual representational correspondence between convolutional neural networks and the human brain |
title_fullStr | Limits to visual representational correspondence between convolutional neural networks and the human brain |
title_full_unstemmed | Limits to visual representational correspondence between convolutional neural networks and the human brain |
title_short | Limits to visual representational correspondence between convolutional neural networks and the human brain |
title_sort | limits to visual representational correspondence between convolutional neural networks and the human brain |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8024324/ https://www.ncbi.nlm.nih.gov/pubmed/33824315 http://dx.doi.org/10.1038/s41467-021-22244-7 |
work_keys_str_mv | AT xuyaoda limitstovisualrepresentationalcorrespondencebetweenconvolutionalneuralnetworksandthehumanbrain AT vaziripashkammaryam limitstovisualrepresentationalcorrespondencebetweenconvolutionalneuralnetworksandthehumanbrain |