I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation
Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long-term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
Main Authors: | Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Research Foundation, 2012 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3342577/ https://www.ncbi.nlm.nih.gov/pubmed/22563315 http://dx.doi.org/10.3389/fnbot.2012.00003 |
_version_ | 1782231711537430528 |
author | Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne |
author_sort | Boucher, Jean-David |
collection | PubMed |
description | Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long-term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. |
format | Online Article Text |
id | pubmed-3342577 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2012 |
publisher | Frontiers Research Foundation |
record_format | MEDLINE/PubMed |
spelling | pubmed-3342577 2012-05-04 I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne Front Neurorobot Neuroscience Frontiers Research Foundation 2012-05-03 /pmc/articles/PMC3342577/ /pubmed/22563315 http://dx.doi.org/10.3389/fnbot.2012.00003 Text en Copyright © 2012 Boucher, Pattacini, Lelong, Bailly, Elisei, Fagel, Dominey and Ventre-Dominey. http://www.frontiersin.org/licenseagreement This is an open-access article distributed under the terms of the Creative Commons Attribution Non Commercial License, which permits non-commercial use, distribution, and reproduction in other forums, provided the original authors and source are credited. |
title | I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3342577/ https://www.ncbi.nlm.nih.gov/pubmed/22563315 http://dx.doi.org/10.3389/fnbot.2012.00003 |