
Position Information Encoded by Population Activity in Hierarchical Visual Areas

Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior.


Bibliographic Details
Main Authors: Majima, Kei, Sukhanov, Paul, Horikawa, Tomoyasu, Kamitani, Yukiyasu
Format: Online Article Text
Language: English
Published: Society for Neuroscience 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5394939/
https://www.ncbi.nlm.nih.gov/pubmed/28451634
http://dx.doi.org/10.1523/ENEURO.0268-16.2017
_version_ 1783229796643241984
author Majima, Kei
Sukhanov, Paul
Horikawa, Tomoyasu
Kamitani, Yukiyasu
author_facet Majima, Kei
Sukhanov, Paul
Horikawa, Tomoyasu
Kamitani, Yukiyasu
author_sort Majima, Kei
collection PubMed
description Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior.
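The description above outlines the decoding approach: 2-D receptive-field (RF) models are fitted to individual fMRI voxels, the stimulus position is then decoded by maximum likelihood estimation from the fitted RF models, and a model-free support vector regression (SVR) is tested as an alternative decoder. The sketch below is only a rough illustration of that idea, not the authors' implementation: the isotropic Gaussian RF form, the least-squares likelihood, the grid search, the synthetic data, and all names (gaussian_rf_response, decode_ml, centers, widths, gains) are assumptions introduced for this example.

```python
# Minimal sketch (assumptions): decode a stimulus position from voxel responses
# using (1) maximum likelihood under fitted 2-D Gaussian receptive-field models
# and (2) a model-free support vector regression baseline, as described in the
# abstract. Gaussian RFs, isotropic noise, and a grid search over candidate
# positions are simplifying assumptions, not the paper's exact procedure.
import numpy as np
from sklearn.svm import SVR

def gaussian_rf_response(pos, centers, widths, gains):
    """Predicted response of each voxel to a stimulus at position `pos` (x, y),
    assuming an isotropic 2-D Gaussian RF per voxel."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return gains * np.exp(-d2 / (2.0 * widths ** 2))

def decode_ml(observed, centers, widths, gains, grid):
    """Maximum-likelihood decoding: pick the grid position whose predicted
    population response best matches the observed pattern (Gaussian noise
    makes this a least-squares match)."""
    errors = [np.sum((observed - gaussian_rf_response(p, centers, widths, gains)) ** 2)
              for p in grid]
    return grid[int(np.argmin(errors))]

# --- toy usage with synthetic data (purely illustrative) ---
rng = np.random.default_rng(0)
n_voxels = 200
centers = rng.uniform(-10, 10, size=(n_voxels, 2))   # RF centers (deg)
widths = rng.uniform(1, 5, size=n_voxels)             # RF sizes (deg)
gains = rng.uniform(0.5, 2.0, size=n_voxels)          # response gains

true_pos = np.array([3.0, -2.0])
observed = gaussian_rf_response(true_pos, centers, widths, gains) \
           + 0.05 * rng.standard_normal(n_voxels)

grid = np.array([(x, y) for x in np.linspace(-10, 10, 41)
                        for y in np.linspace(-10, 10, 41)])
print("ML estimate:", decode_ml(observed, centers, widths, gains, grid))

# Model-free alternative: train one SVR per coordinate on (pattern, position)
# pairs from a training session, then predict positions for a held-out pattern.
train_pos = rng.uniform(-10, 10, size=(500, 2))
train_X = np.array([gaussian_rf_response(p, centers, widths, gains) for p in train_pos])
svr_x = SVR(kernel="linear").fit(train_X, train_pos[:, 0])
svr_y = SVR(kernel="linear").fit(train_X, train_pos[:, 1])
print("SVR estimate:", svr_x.predict(observed[None])[0], svr_y.predict(observed[None])[0])
```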
format Online
Article
Text
id pubmed-5394939
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Society for Neuroscience
record_format MEDLINE/PubMed
spelling pubmed-53949392017-04-27 Position Information Encoded by Population Activity in Hierarchical Visual Areas Majima, Kei Sukhanov, Paul Horikawa, Tomoyasu Kamitani, Yukiyasu eNeuro New Research Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior. Society for Neuroscience 2017-04-04 /pmc/articles/PMC5394939/ /pubmed/28451634 http://dx.doi.org/10.1523/ENEURO.0268-16.2017 Text en Copyright © 2017 Majima et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
spellingShingle New Research
Majima, Kei
Sukhanov, Paul
Horikawa, Tomoyasu
Kamitani, Yukiyasu
Position Information Encoded by Population Activity in Hierarchical Visual Areas
title Position Information Encoded by Population Activity in Hierarchical Visual Areas
title_full Position Information Encoded by Population Activity in Hierarchical Visual Areas
title_fullStr Position Information Encoded by Population Activity in Hierarchical Visual Areas
title_full_unstemmed Position Information Encoded by Population Activity in Hierarchical Visual Areas
title_short Position Information Encoded by Population Activity in Hierarchical Visual Areas
title_sort position information encoded by population activity in hierarchical visual areas
topic New Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5394939/
https://www.ncbi.nlm.nih.gov/pubmed/28451634
http://dx.doi.org/10.1523/ENEURO.0268-16.2017
work_keys_str_mv AT majimakei positioninformationencodedbypopulationactivityinhierarchicalvisualareas
AT sukhanovpaul positioninformationencodedbypopulationactivityinhierarchicalvisualareas
AT horikawatomoyasu positioninformationencodedbypopulationactivityinhierarchicalvisualareas
AT kamitaniyukiyasu positioninformationencodedbypopulationactivityinhierarchicalvisualareas