Deep neural networks capture texture sensitivity in V2

Deep convolutional neural networks (CNNs) trained on visual objects have shown intriguing ability to predict some response properties of visual cortical neurons. However, the factors (e.g., if the model is trained or not, receptive field size) and computations (e.g., convolution, rectification, pooling, normalization) that give rise to such ability, at what level, and the role of intermediate processing stages in explaining changes that develop across areas of the cortical hierarchy are poorly understood. We focused on the sensitivity to textures as a paradigmatic example, since recent neurophysiology experiments provide rich data pointing to texture sensitivity in secondary (but not primary) visual cortex (V2). We initially explored the CNN without any fitting to the neural data and found that the first two layers of the CNN showed qualitative correspondence to the first two cortical areas in terms of texture sensitivity. We therefore developed a quantitative approach to select a population of CNN model neurons that best fits the brain neural recordings. We found that the CNN could develop compatibility to secondary cortex in the second layer following rectification and that this was improved following pooling but only mildly influenced by the local normalization operation. Higher layers of the CNN could further, though modestly, improve the compatibility with the V2 data. The compatibility was reduced when incorporating random rather than learned weights. Our results show that the CNN class of model is effective for capturing changes that develop across early areas of cortex, and has the potential to help identify the computations that give rise to hierarchical processing in the brain (code is available in GitHub).
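The abstract lists the computations under study: convolution, rectification, pooling, and local normalization. Below is a minimal sketch of such a layer stack, assuming an AlexNet-style first stage in PyTorch; this is not the authors' released GitHub code, and the layer sizes and stage ordering are illustrative only.

# Minimal sketch (not the authors' released code) of the computations
# named in the abstract: convolution, rectification, pooling, and
# local response normalization. Assumes an AlexNet-style first stage;
# filter counts, kernel sizes, and stage order are illustrative.
import torch
import torch.nn as nn

stage = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),  # convolution
    nn.ReLU(),                                              # rectification
    nn.MaxPool2d(kernel_size=3, stride=2),                  # pooling
    nn.LocalResponseNorm(size=5),                           # local normalization
)

x = torch.randn(1, 3, 224, 224)  # one image-sized random input
responses = stage(x)             # activations play the role of "model neurons"
                                 # whose responses can be compared to recordings
print(responses.shape)           # torch.Size([1, 64, 27, 27])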


Bibliographic Details
Main Authors: Laskar, Md Nasir Uddin; Sanchez Giraldo, Luis Gonzalo; Schwartz, Odelia
Format: Online Article (Text)
Language: English
Published: The Association for Research in Vision and Ophthalmology, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7424103/
https://www.ncbi.nlm.nih.gov/pubmed/32692830
http://dx.doi.org/10.1167/jov.20.7.21
Journal: J Vis (Journal of Vision)
Published Online: 2020-07-21
License: Copyright 2020 The Authors. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).