Learning to grasp and extract affordances: the Integrated Learning of Grasps and Affordances (ILGA) model

Bibliographic Details
Main Authors: Bonaiuto, James; Arbib, Michael A.
Format: Online Article (Text)
Language: English
Published: Springer Berlin Heidelberg, 2015
Subjects: Original Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4656720/
https://www.ncbi.nlm.nih.gov/pubmed/26585965
http://dx.doi.org/10.1007/s00422-015-0666-2

Description: The activity of certain parietal neurons has been interpreted as encoding affordances (directly perceivable opportunities) for grasping. Separate computational models have been developed for infant grasp learning and affordance learning, but no single model has yet combined these processes in a neurobiologically plausible way. We present the Integrated Learning of Grasps and Affordances (ILGA) model that simultaneously learns grasp affordances from visual object features and motor parameters for planning grasps using trial-and-error reinforcement learning. As in the Infant Learning to Grasp Model, we model a stage of infant development prior to the onset of sophisticated visual processing of hand–object relations, but we assume that certain premotor neurons activate neural populations in primary motor cortex that synergistically control different combinations of fingers. The ILGA model is able to extract affordance representations from visual object features, learn motor parameters for generating stable grasps, and generalize its learned representations to novel objects.

Electronic supplementary material: The online version of this article (doi:10.1007/s00422-015-0666-2) contains supplementary material, which is available to authorized users.
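
The abstract's core mechanism, trial-and-error reinforcement learning of grasp parameters from visual object features, can be illustrated with a minimal Python sketch. The perturbation-based (REINFORCE-style) update with a reward baseline, the linear feature-to-parameter mapping, and all names, sizes, and the toy reward below are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: visual object features -> grasp motor parameters.
    n_features, n_params = 12, 4
    W = rng.normal(scale=0.1, size=(n_params, n_features))

    def grasp_reward(params, target):
        """Toy stand-in for grasp stability: higher when the executed
        parameters are closer to a grasp suited to the object."""
        return -np.sum((params - target) ** 2)

    lr, sigma = 0.01, 0.3   # learning rate; exploration noise scale
    baseline = 0.0          # running-average reward used as a baseline

    for trial in range(2000):
        x = rng.normal(size=n_features)        # features of a random object
        target = 0.5 * x[:n_params]            # assumed "correct" grasp for it
        mean = W @ x                           # planned grasp parameters
        eps = rng.normal(scale=sigma, size=n_params)
        reward = grasp_reward(mean + eps, target)  # execute perturbed grasp
        # REINFORCE update for a Gaussian policy with fixed sigma:
        # grad of log-probability w.r.t. W is outer(eps, x) / sigma^2.
        W += lr * (reward - baseline) * np.outer(eps, x) / sigma**2
        baseline += 0.05 * (reward - baseline)

Over trials the mapping W drifts toward parameters that yield higher (less negative) reward, which is the sense in which trial-and-error exploration, rather than supervised targets, shapes the grasp plan.
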
Collection: PubMed (National Center for Biotechnology Information)
Record ID: pubmed-4656720
Record format: MEDLINE/PubMed
Journal: Biol Cybern (Original Article)
Published online: 2015-11-19
License: © The Author(s) 2015. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.