
Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements



Bibliographic Details
Main Authors: Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2015
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4294135/
https://www.ncbi.nlm.nih.gov/pubmed/25642198
http://dx.doi.org/10.3389/fpsyg.2014.01457
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

Collection: PubMed (National Center for Biotechnology Information)
Record ID: pubmed-4294135
Journal: Front Psychol (Psychology)
Record Format: MEDLINE/PubMed
Published online: 2015-01-14
Copyright © 2015 Grossberg, Srinivasan and Yazdanbakhsh. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.