
Combined representation of visual features in the scene-selective cortex


Bibliographic Details

Main Authors: Kang, Jisu; Park, Soojin
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10402097/
https://www.ncbi.nlm.nih.gov/pubmed/37546776
http://dx.doi.org/10.1101/2023.07.24.550280
author Kang, Jisu
Park, Soojin
collection PubMed
description Visual features of separable dimensions like color and shape conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene. Specifically, we focused on features important for visually guided navigation: direction and distance. Previously, separate works have shown that directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how separate features are concurrently represented in the OPA. Participants saw eight different types of scenes, in which four of them had one path and the other four had two paths. In single-path scenes, path direction was either to the left or to the right. In double-path scenes, both directions were present. Each path contained a glass wall located either near or far, changing the navigational distance. To test how the OPA represents paths in terms of direction and distance features, we took three approaches. First, the independent-features approach examined whether the OPA codes directions and distances independently in single-path scenes. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA’s representations of single-path scenes were similar to other single-path scenes of either the same direction or the same distance. Representations of double-path scenes were similar to the combination of two constituent single-paths, as a combined unit of direction and distance rather than pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. 
Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple features relevant for navigation and represents a navigation file.
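The multi-voxel pattern similarity analysis described above can be illustrated with a minimal synthetic sketch. This is not the authors' code, and the data are randomly generated stand-ins for fMRI voxel patterns; it only shows the shape of the integrated-features test: whether a double-path scene's pattern correlates with the mean of its two constituent single-path patterns.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two voxel activation patterns."""
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
n_voxels = 200  # hypothetical ROI size, not from the paper

# Synthetic voxel patterns for two single-path scenes
# (e.g. left/near and right/far paths).
left_near = rng.normal(size=n_voxels)
right_far = rng.normal(size=n_voxels)

# Integrated-features account: model the double-path scene's pattern as
# the average of its constituent single-path patterns, plus noise.
double_path = (left_near + right_far) / 2 + rng.normal(scale=0.3, size=n_voxels)

predicted = (left_near + right_far) / 2
r = pattern_similarity(double_path, predicted)
print(f"similarity to predicted combination: {r:.2f}")
```

In the actual study, this kind of correlation would be computed between held-out fMRI response patterns and compared against alternative models (e.g. a pooled-features prediction) rather than against a single synthetic prediction.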
format Online
Article
Text
id pubmed-10402097
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Cold Spring Harbor Laboratory
record_format MEDLINE/PubMed
spelling pubmed-10402097 2023-08-05 Combined representation of visual features in the scene-selective cortex Kang, Jisu Park, Soojin bioRxiv Article Cold Spring Harbor Laboratory 2023-07-26 /pmc/articles/PMC10402097/ /pubmed/37546776 http://dx.doi.org/10.1101/2023.07.24.550280 Text en https://creativecommons.org/licenses/by-nc-nd/4.0/ This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which allows reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator.
title Combined representation of visual features in the scene-selective cortex
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10402097/
https://www.ncbi.nlm.nih.gov/pubmed/37546776
http://dx.doi.org/10.1101/2023.07.24.550280