Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We cr...
Main Authors: | Abedi Khoozani, Parisa; Bharmauria, Vishal; Schütz, Adrian; Wildes, Richard P; Crawford, J Douglas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9334293/ https://www.ncbi.nlm.nih.gov/pubmed/35909704 http://dx.doi.org/10.1093/texcom/tgac026 |
Field | Value
---|---
_version_ | 1784759072124305408
author | Abedi Khoozani, Parisa; Bharmauria, Vishal; Schütz, Adrian; Wildes, Richard P; Crawford, J Douglas
author_facet | Abedi Khoozani, Parisa; Bharmauria, Vishal; Schütz, Adrian; Wildes, Richard P; Crawford, J Douglas
author_sort | Abedi Khoozani, Parisa |
collection | PubMed |
description | Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors. |
format | Online Article Text |
id | pubmed-9334293 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9334293 2022-07-29 Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts Abedi Khoozani, Parisa; Bharmauria, Vishal; Schütz, Adrian; Wildes, Richard P; Crawford, J Douglas Cereb Cortex Commun Original Article Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R² = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors. Oxford University Press 2022-07-08 /pmc/articles/PMC9334293/ /pubmed/35909704 http://dx.doi.org/10.1093/texcom/tgac026 Text en © The Author(s) 2022. Published by Oxford University Press. https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Original Article; Abedi Khoozani, Parisa; Bharmauria, Vishal; Schütz, Adrian; Wildes, Richard P; Crawford, J Douglas; Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts
title | Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
title_full | Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
title_fullStr | Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
title_full_unstemmed | Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
title_short | Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
title_sort | integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9334293/ https://www.ncbi.nlm.nih.gov/pubmed/35909704 http://dx.doi.org/10.1093/texcom/tgac026 |
work_keys_str_mv | AT abedikhoozaniparisa integrationofallocentricandegocentricvisualinformationinaconvolutionalmultilayerperceptronnetworkmodelofgoaldirectedgazeshifts AT bharmauriavishal integrationofallocentricandegocentricvisualinformationinaconvolutionalmultilayerperceptronnetworkmodelofgoaldirectedgazeshifts AT schutzadrian integrationofallocentricandegocentricvisualinformationinaconvolutionalmultilayerperceptronnetworkmodelofgoaldirectedgazeshifts AT wildesrichardp integrationofallocentricandegocentricvisualinformationinaconvolutionalmultilayerperceptronnetworkmodelofgoaldirectedgazeshifts AT crawfordjdouglas integrationofallocentricandegocentricvisualinformationinaconvolutionalmultilayerperceptronnetworkmodelofgoaldirectedgazeshifts |
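The abstract in the record above describes a two-stage pipeline: a CNN encodes the visual display (saccade target plus a landmark that can shift), an MLP combines the CNN output with initial gaze position, and a decoder reads out a saccade vector. The PyTorch sketch below is a minimal illustration of that routing only; the input resolution, layer sizes, class and variable names are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of the CNN -> MLP -> decoder pipeline described in the
# abstract. All dimensions and names are assumptions for illustration.
import torch
import torch.nn as nn

class GazeShiftModel(nn.Module):
    def __init__(self, cnn_features=32, mlp_hidden=128):
        super().__init__()
        # CNN stage: visual front end operating on an eye-centered image
        # containing the saccade target and the landmark.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, cnn_features, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        cnn_out = cnn_features * 4 * 4
        # MLP stage: sensorimotor transformation combining the CNN code
        # with the 2D initial gaze position (egocentric signal).
        self.mlp = nn.Sequential(
            nn.Linear(cnn_out + 2, mlp_hidden), nn.ReLU(),
            nn.Linear(mlp_hidden, mlp_hidden), nn.ReLU(),
        )
        # Decoder: maps MLP output units to a horizontal/vertical saccade vector.
        self.decoder = nn.Linear(mlp_hidden, 2)

    def forward(self, retinal_image, gaze_position):
        visual_code = self.cnn(retinal_image)                  # (batch, cnn_out)
        combined = torch.cat([visual_code, gaze_position], dim=1)
        return self.decoder(self.mlp(combined))                # (batch, 2) saccade vector

# Example usage with random inputs (batch of 8, 64x64 eye-centered images).
model = GazeShiftModel()
image = torch.randn(8, 1, 64, 64)   # visual display: target + landmark
gaze = torch.randn(8, 2)            # initial gaze position (x, y)
saccade = model(image, gaze)        # predicted saccade vectors, shape (8, 2)
```

Concatenating the gaze position with the flattened CNN code before the MLP is one simple way to mirror the described routing, in which visual parameters feed the CNN while the CNN output and initial gaze position feed the MLP.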