Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands
Grasp affordances in robotics represent different ways to grasp an object, involving a variety of factors from vision to hand control. A model of grasp affordances that can scale across different objects, features and domains is needed to provide robots with advanced manipulation skills. Existing frameworks, however, can be difficult to extend towards a more general, domain-independent approach. This work is the first step towards a modular implementation of grasp affordances, separated into two stages: approach to grasp and grasp execution. In this study, human experiments of approaching to grasp are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size and weight. Motion capture data describing the hand-object approach distance was used for the analysis. The results showed that approach to grasp can be structured into four distinct phases that are best represented by non-linear models, independently of the objects being handled. This suggests that approach-to-grasp patterns follow an intentionally planned control strategy rather than a reactive execution.
Main Authors: | Cotugno, Giuseppe; Konstantinova, Jelizaveta; Althoefer, Kaspar; Nanayakkara, Thrishantha
Format: | Online Article Text
Language: | English
Published: | Public Library of Science, 2018
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6306220/ https://www.ncbi.nlm.nih.gov/pubmed/30586407 http://dx.doi.org/10.1371/journal.pone.0208228
_version_ | 1783382733138952192 |
author | Cotugno, Giuseppe; Konstantinova, Jelizaveta; Althoefer, Kaspar; Nanayakkara, Thrishantha
author_facet | Cotugno, Giuseppe; Konstantinova, Jelizaveta; Althoefer, Kaspar; Nanayakkara, Thrishantha
author_sort | Cotugno, Giuseppe |
collection | PubMed |
description | Grasp affordances in robotics represent different ways to grasp an object, involving a variety of factors from vision to hand control. A model of grasp affordances that can scale across different objects, features and domains is needed to provide robots with advanced manipulation skills. Existing frameworks, however, can be difficult to extend towards a more general, domain-independent approach. This work is the first step towards a modular implementation of grasp affordances, separated into two stages: approach to grasp and grasp execution. In this study, human experiments of approaching to grasp are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size and weight. Motion capture data describing the hand-object approach distance was used for the analysis. The results showed that approach to grasp can be structured into four distinct phases that are best represented by non-linear models, independently of the objects being handled. This suggests that approach-to-grasp patterns follow an intentionally planned control strategy rather than a reactive execution.
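The description reports that the approach-to-grasp motion segments into four phases, each best captured by a non-linear model. As a purely illustrative sketch of that kind of analysis (not the authors' actual pipeline), the Python snippet below synthesises a hand-object approach-distance trace, splits it at assumed phase boundaries, and compares a linear against a non-linear fit within each phase. The minimum-jerk profile used to generate the data, the breakpoint fractions and the cubic candidate model are all assumptions made for demonstration.

```python
# Illustrative sketch only: per-phase linear vs. non-linear model comparison
# on a synthetic hand-object approach-distance trace. Phase boundaries,
# the minimum-jerk profile and the cubic model are assumptions, not the
# models fitted in the paper.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic approach: distance decays from 0.6 m to 0 m over 2 s
# following a minimum-jerk-like profile, plus small sensor noise.
t = np.linspace(0.0, 2.0, 400)
s = t / t[-1]
distance = 0.6 * (1.0 - (10 * s**3 - 15 * s**4 + 6 * s**5))
distance += rng.normal(0.0, 0.004, t.size)

# Assumed phase boundaries, as fractions of the reach duration.
bounds = [0.0, 0.25, 0.55, 0.85, 1.0]

def linear(x, a, b):
    return a * x + b

def cubic(x, a, b, c, d):  # a simple non-linear candidate
    return a * x**3 + b * x**2 + c * x + d

for i in range(4):
    lo, hi = int(bounds[i] * t.size), int(bounds[i + 1] * t.size)
    ts, ds = t[lo:hi], distance[lo:hi]
    for name, model in [("linear", linear), ("cubic", cubic)]:
        params, _ = curve_fit(model, ts, ds)
        rmse = np.sqrt(np.mean((model(ts, *params) - ds) ** 2))
        print(f"phase {i + 1} {name:6s} RMSE = {rmse:.4f} m")
```

By construction the non-linear candidate tracks the synthetic profile more closely in each segment; on real motion-capture traces, the same per-phase residual comparison is the kind of evidence that would support the abstract's claim that non-linear models best represent each phase.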
format | Online Article Text |
id | pubmed-6306220 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-6306220 2019-01-08 Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands Cotugno, Giuseppe Konstantinova, Jelizaveta Althoefer, Kaspar Nanayakkara, Thrishantha PLoS One Research Article Grasp affordances in robotics represent different ways to grasp an object, involving a variety of factors from vision to hand control. A model of grasp affordances that can scale across different objects, features and domains is needed to provide robots with advanced manipulation skills. Existing frameworks, however, can be difficult to extend towards a more general, domain-independent approach. This work is the first step towards a modular implementation of grasp affordances, separated into two stages: approach to grasp and grasp execution. In this study, human experiments of approaching to grasp are analysed, and object-independent patterns of motion are defined and modelled analytically from the data. Human subjects performed a specific action (hammering) using objects of different geometry, size and weight. Motion capture data describing the hand-object approach distance was used for the analysis. The results showed that approach to grasp can be structured into four distinct phases that are best represented by non-linear models, independently of the objects being handled. This suggests that approach-to-grasp patterns follow an intentionally planned control strategy rather than a reactive execution. Public Library of Science 2018-12-26 /pmc/articles/PMC6306220/ /pubmed/30586407 http://dx.doi.org/10.1371/journal.pone.0208228 Text en © 2018 Cotugno et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article Cotugno, Giuseppe Konstantinova, Jelizaveta Althoefer, Kaspar Nanayakkara, Thrishantha Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title | Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title_full | Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title_fullStr | Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title_full_unstemmed | Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title_short | Modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
title_sort | modelling the structure of object-independent human affordances of approaching to grasp for robotic hands |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6306220/ https://www.ncbi.nlm.nih.gov/pubmed/30586407 http://dx.doi.org/10.1371/journal.pone.0208228 |
work_keys_str_mv | AT cotugnogiuseppe modellingthestructureofobjectindependenthumanaffordancesofapproachingtograspforrobotichands AT konstantinovajelizaveta modellingthestructureofobjectindependenthumanaffordancesofapproachingtograspforrobotichands AT althoeferkaspar modellingthestructureofobjectindependenthumanaffordancesofapproachingtograspforrobotichands AT nanayakkarathrishantha modellingthestructureofobjectindependenthumanaffordancesofapproachingtograspforrobotichands |