Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction
Main Author: | Daucé, Emmanuel |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2018 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302111/ https://www.ncbi.nlm.nih.gov/pubmed/30618705 http://dx.doi.org/10.3389/fnbot.2018.00076 |
_version_ | 1783381920667664384 |
---|---|
author | Daucé, Emmanuel |
author_facet | Daucé, Emmanuel |
author_sort | Daucé, Emmanuel |
collection | PubMed |
description | What motivates an action in the absence of a definite reward? Taking the case of visuomotor control, we consider a minimal control problem: how to select the next saccade, in a sequence of discrete eye movements, when the final objective is to better interpret the current visual scene. The visual scene is modeled here as a partially observed environment, with a generative model explaining how the visual data are shaped by action. This allows us to interpret different action-selection metrics proposed in the literature, including the Salience, the Infomax, and the Variational Free Energy, under a single information-theoretic construct, namely the view-based Information Gain. Pursuing this analytic track, two original action-selection metrics, named the Information Gain Lower Bound (IGLB) and the Information Gain Upper Bound (IGUB), are then proposed. Showing either a conservative or an optimistic bias with respect to the Information Gain, they greatly simplify its calculation. An original fovea-based visual scene decoding setup is then proposed, with numerical experiments highlighting different facets of artificial fovea-based vision. A first and principal result is that state-of-the-art recognition rates are obtained with fovea-based saccadic exploration, using less than 10% of the original image's data. These satisfactory results illustrate the advantage of combining predictive control with accurate state-of-the-art predictors, namely a deep neural network. A second result is the sub-optimality of some classical action-selection metrics widely used in the literature, which is not manifest with finely tuned inference models but becomes patent when coarse or faulty models are used. Last, a computationally effective predictive model is developed using the IGLB objective, with pre-processed visual scan-paths read out from memory, bypassing computationally demanding predictive calculations. This last simplified setting proves effective in our case, showing both competitive accuracy and good robustness to model flaws. |
format | Online Article Text |
id | pubmed-6302111 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6302111 2019-01-07 Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction Daucé, Emmanuel Front Neurorobot Neuroscience Frontiers Media S.A. 2018-12-14 /pmc/articles/PMC6302111/ /pubmed/30618705 http://dx.doi.org/10.3389/fnbot.2018.00076 Text en Copyright © 2018 Daucé. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Daucé, Emmanuel Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title | Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title_full | Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title_fullStr | Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title_full_unstemmed | Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title_short | Active Fovea-Based Vision Through Computationally-Effective Model-Based Prediction |
title_sort | active fovea-based vision through computationally-effective model-based prediction |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302111/ https://www.ncbi.nlm.nih.gov/pubmed/30618705 http://dx.doi.org/10.3389/fnbot.2018.00076 |
work_keys_str_mv | AT dauceemmanuel activefoveabasedvisionthroughcomputationallyeffectivemodelbasedprediction |
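
As a point of reference for the metrics named in the description (a generic formulation, not taken from the article itself), the view-based Information Gain of a candidate saccade $a$ is commonly written as the expected reduction in uncertainty about the scene interpretation $z$ once the corresponding foveal view $x_a$ is observed, given the views $x_{1:t}$ collected so far:

$$
\mathrm{IG}(a) \;=\; H\big[p(z \mid x_{1:t})\big] \;-\; \mathbb{E}_{x_a \sim p(x_a \mid x_{1:t})}\Big[ H\big[p(z \mid x_{1:t}, x_a)\big] \Big],
\qquad a^{*} \;=\; \operatorname*{arg\,max}_{a}\, \mathrm{IG}(a).
$$

According to the abstract, the IGLB and IGUB respectively lower- and upper-bound this quantity, trading the costly expectation over predicted views for a much cheaper computation; their exact definitions are given in the full article.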