Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images


Bibliographic Details

Main Authors: Wagatsuma, Nobuhiko, Hidaka, Akinori, Tamura, Hiroshi
Format: Online Article Text
Language: English
Published: Society for Neuroscience 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7890521/
https://www.ncbi.nlm.nih.gov/pubmed/33234544
http://dx.doi.org/10.1523/ENEURO.0200-20.2020
_version_ 1783652528779427840
author Wagatsuma, Nobuhiko
Hidaka, Akinori
Tamura, Hiroshi
author_facet Wagatsuma, Nobuhiko
Hidaka, Akinori
Tamura, Hiroshi
author_sort Wagatsuma, Nobuhiko
collection PubMed
description Attentional selection is a function that allocates the brain’s computational resources to the most important part of a visual scene at a specific moment. Saliency map models have been proposed as computational models to predict attentional selection within a spatial location. Recent saliency map models based on deep convolutional neural networks (DCNNs) exhibit the highest performance for predicting the location of attentional selection and human gaze, which reflect overt attention. Trained DCNNs potentially provide insight into the perceptual mechanisms of biological visual systems. However, the relationship between artificial and neural representations used for determining attentional selection and gaze location remains unknown. To understand the mechanism underlying saliency map models based on DCNNs and the neural system of attentional selection, we investigated the correspondence between layers of a DCNN saliency map model and monkey visual areas for natural image representations. We compared the characteristics of the responses in each layer of the model with those of the neural representation in the primary visual (V1), intermediate visual (V4), and inferior temporal (IT) cortices. Regardless of the DCNN layer level, the characteristics of the responses were consistent with those of the neural representation in V1. We found marked peaks of correspondence between V1 and the early-level and higher-intermediate-level layers of the model. These results provide insight into the mechanism of the trained DCNN saliency map model and suggest that the neural representations in V1 play an important role in computing the saliency that mediates attentional selection, which supports the V1 saliency hypothesis.
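The layer-by-area correspondence described above — comparing response characteristics of each DCNN layer with neural representations in V1, V4, and IT — is often implemented with representational similarity analysis (RSA). The sketch below is a minimal, hypothetical illustration of that general technique, not the paper's actual analysis code; the array shapes, the random toy data, and the choice of rank correlation between dissimilarity matrices are all assumptions.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between response patterns for every pair of images.
    responses: (n_images, n_units) array."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs
    (Spearman = Pearson correlation of the ranked values)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ranks_a = a.argsort().argsort().astype(float)
    ranks_b = b.argsort().argsort().astype(float)
    return np.corrcoef(ranks_a, ranks_b)[0, 1]

# Toy example: random stand-ins for model activations and spike counts.
rng = np.random.default_rng(0)
layer_acts = rng.normal(size=(20, 256))  # 20 images x 256 model units
v1_resps = rng.normal(size=(20, 50))     # 20 images x 50 V1 neurons
score = rdm_similarity(rdm(layer_acts), rdm(v1_resps))
```

Repeating `rdm_similarity` for every model layer against each cortical area (V1, V4, IT) yields a correspondence profile across layers, which is the kind of curve in which peaks between V1 and particular layers could be identified.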
format Online
Article
Text
id pubmed-7890521
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Society for Neuroscience
record_format MEDLINE/PubMed
spelling pubmed-78905212021-02-18 Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images Wagatsuma, Nobuhiko Hidaka, Akinori Tamura, Hiroshi eNeuro Research Article: New Research Society for Neuroscience 2021-01-12 /pmc/articles/PMC7890521/ /pubmed/33234544 http://dx.doi.org/10.1523/ENEURO.0200-20.2020 Text en Copyright © 2021 Wagatsuma et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
spellingShingle Research Article: New Research
Wagatsuma, Nobuhiko
Hidaka, Akinori
Tamura, Hiroshi
Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title_full Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title_fullStr Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title_full_unstemmed Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title_short Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images
title_sort correspondence between monkey visual cortices and layers of a saliency map model based on a deep convolutional neural network for representations of natural images
topic Research Article: New Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7890521/
https://www.ncbi.nlm.nih.gov/pubmed/33234544
http://dx.doi.org/10.1523/ENEURO.0200-20.2020
work_keys_str_mv AT wagatsumanobuhiko correspondencebetweenmonkeyvisualcorticesandlayersofasaliencymapmodelbasedonadeepconvolutionalneuralnetworkforrepresentationsofnaturalimages
AT hidakaakinori correspondencebetweenmonkeyvisualcorticesandlayersofasaliencymapmodelbasedonadeepconvolutionalneuralnetworkforrepresentationsofnaturalimages
AT tamurahiroshi correspondencebetweenmonkeyvisualcorticesandlayersofasaliencymapmodelbasedonadeepconvolutionalneuralnetworkforrepresentationsofnaturalimages