
Joint Learning of Binocularly Driven Saccades and Vergence by Active Efficient Coding


Bibliographic Details
Main Authors: Zhu, Qingpeng, Triesch, Jochen, Shi, Bertram E.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5675843/
https://www.ncbi.nlm.nih.gov/pubmed/29163121
http://dx.doi.org/10.3389/fnbot.2017.00058
collection PubMed
description This paper investigates two types of eye movements: vergence and saccades. Vergence eye movements are responsible for bringing the images of the two eyes into correspondence, whereas saccades drive gaze to interesting regions in the scene. Control of both vergence and saccades develops during early infancy. To date, these two types of eye movements have been studied separately. Here, we propose a computational model of an active vision system that integrates these two types of eye movements. We hypothesize that incorporating a saccade strategy driven by bottom-up attention will benefit the development of vergence control. The integrated system is based on the active efficient coding framework, which describes the joint development of sensory-processing and eye movement control to jointly optimize the coding efficiency of the sensory system. In the integrated system, we propose a binocular saliency model to drive saccades based on learned binocular feature extractors, which simultaneously encode both depth and texture information. Saliency in our model also depends on the current fixation point. This extends prior work, which focused on monocular images and saliency measures that are independent of the current fixation. Our results show that the proposed saliency-driven saccades lead to better vergence performance and faster learning in the overall system than random saccades. Faster learning is significant because it indicates that the system actively selects inputs for the most effective learning. This work suggests that saliency-driven saccades provide a scaffold for the development of vergence control during infancy.
id pubmed-5675843
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-5675843 2017-11-21 Front Neurorobot Neuroscience Frontiers Media S.A.
2017-11-03 /pmc/articles/PMC5675843/ /pubmed/29163121 http://dx.doi.org/10.3389/fnbot.2017.00058 Text en Copyright © 2017 Zhu, Triesch and Shi. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience