
Improving Haptic Response for Contextual Human Robot Interaction

For haptic interaction, a user in a virtual environment needs to interact with proxies attached to a robot. The device must be at the exact location defined in the virtual environment in time. However, due to device limitations, delays are always unavoidable. One of the solutions to improve the device response is to infer human intended motion and move the robot at the earliest time possible to the desired goal. This paper presents an experimental study to improve the prediction time and reduce the robot time taken to reach the desired position. We developed motion strategies based on the hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame for reach and grab tasks designed for upper limb rehabilitation training. The experimental results in this study revealed that eye-gaze-based prediction significantly improved the detection time by 37% and the robot time taken to reach the target by 27%. Further analysis provided more insight on the effect of the eye-gaze window and the hand threshold on the device response for the experimental task.
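The prediction strategy the abstract describes (a sliding eye-gaze window combined with a hand-motion threshold, used to commit the robot to the intended target as early as possible) might be sketched roughly as follows. This is a minimal illustration only: all function names, the window size, and the threshold value are assumptions for the sketch, not values or code from the paper.

```python
import math

# Illustrative parameters; the paper analyzes the effect of the gaze window
# and hand threshold, but these specific values are assumptions.
GAZE_WINDOW = 5              # number of recent gaze samples to vote over
HAND_SPEED_THRESHOLD = 0.05  # displacement per sample above which the reach has started

def nearest_target(point, targets):
    """Return the id of the target whose 3D position is closest to `point`."""
    return min(targets, key=lambda t: math.dist(point, targets[t]))

def predict_target(gaze_samples, hand_positions, targets):
    """Predict the interaction target from recent gaze once the hand starts moving.

    gaze_samples: list of 3D gaze points in the scene (most recent last)
    hand_positions: list of 3D hand positions (most recent last)
    targets: dict mapping target id -> 3D position
    Returns a target id, or None if no confident prediction can be made yet.
    """
    if len(hand_positions) < 2 or len(gaze_samples) < GAZE_WINDOW:
        return None
    # Hand threshold: only predict once the reaching motion has clearly begun.
    speed = math.dist(hand_positions[-1], hand_positions[-2])
    if speed < HAND_SPEED_THRESHOLD:
        return None
    # Eye-gaze window: majority vote of the nearest target over recent samples.
    votes = [nearest_target(g, targets) for g in gaze_samples[-GAZE_WINDOW:]]
    winner = max(set(votes), key=votes.count)
    # Require a clear majority before committing the robot to move early.
    if votes.count(winner) > GAZE_WINDOW // 2:
        return winner
    return None
```

Because gaze typically lands on the target well before the hand arrives, committing the robot on a confident gaze vote rather than waiting for the hand is what yields the earlier detection time reported in the abstract.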


Bibliographic Details
Main Authors: Mugisha, Stanley, Guda, Vamsi Krisha, Chevallereau, Christine, Zoppi, Matteo, Molfino, Rezia, Chablat, Damien
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8914947/
https://www.ncbi.nlm.nih.gov/pubmed/35271188
http://dx.doi.org/10.3390/s22052040
_version_ 1784667883170693120
author Mugisha, Stanley
Guda, Vamsi Krisha
Chevallereau, Christine
Zoppi, Matteo
Molfino, Rezia
Chablat, Damien
author_facet Mugisha, Stanley
Guda, Vamsi Krisha
Chevallereau, Christine
Zoppi, Matteo
Molfino, Rezia
Chablat, Damien
author_sort Mugisha, Stanley
collection PubMed
description For haptic interaction, a user in a virtual environment needs to interact with proxies attached to a robot. The device must be at the exact location defined in the virtual environment in time. However, due to device limitations, delays are always unavoidable. One of the solutions to improve the device response is to infer human intended motion and move the robot at the earliest time possible to the desired goal. This paper presents an experimental study to improve the prediction time and reduce the robot time taken to reach the desired position. We developed motion strategies based on the hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame for reach and grab tasks designed for upper limb rehabilitation training. The experimental results in this study revealed that eye-gaze-based prediction significantly improved the detection time by 37% and the robot time taken to reach the target by 27%. Further analysis provided more insight on the effect of the eye-gaze window and the hand threshold on the device response for the experimental task.
format Online
Article
Text
id pubmed-8914947
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8914947 2022-03-12 Improving Haptic Response for Contextual Human Robot Interaction Mugisha, Stanley Guda, Vamsi Krisha Chevallereau, Christine Zoppi, Matteo Molfino, Rezia Chablat, Damien Sensors (Basel) Article For haptic interaction, a user in a virtual environment needs to interact with proxies attached to a robot. The device must be at the exact location defined in the virtual environment in time. However, due to device limitations, delays are always unavoidable. One of the solutions to improve the device response is to infer human intended motion and move the robot at the earliest time possible to the desired goal. This paper presents an experimental study to improve the prediction time and reduce the robot time taken to reach the desired position. We developed motion strategies based on the hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame for reach and grab tasks designed for upper limb rehabilitation training. The experimental results in this study revealed that eye-gaze-based prediction significantly improved the detection time by 37% and the robot time taken to reach the target by 27%. Further analysis provided more insight on the effect of the eye-gaze window and the hand threshold on the device response for the experimental task. MDPI 2022-03-05 /pmc/articles/PMC8914947/ /pubmed/35271188 http://dx.doi.org/10.3390/s22052040 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Mugisha, Stanley
Guda, Vamsi Krisha
Chevallereau, Christine
Zoppi, Matteo
Molfino, Rezia
Chablat, Damien
Improving Haptic Response for Contextual Human Robot Interaction
title Improving Haptic Response for Contextual Human Robot Interaction
title_full Improving Haptic Response for Contextual Human Robot Interaction
title_fullStr Improving Haptic Response for Contextual Human Robot Interaction
title_full_unstemmed Improving Haptic Response for Contextual Human Robot Interaction
title_short Improving Haptic Response for Contextual Human Robot Interaction
title_sort improving haptic response for contextual human robot interaction
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8914947/
https://www.ncbi.nlm.nih.gov/pubmed/35271188
http://dx.doi.org/10.3390/s22052040
work_keys_str_mv AT mugishastanley improvinghapticresponseforcontextualhumanrobotinteraction
AT gudavamsikrisha improvinghapticresponseforcontextualhumanrobotinteraction
AT chevallereauchristine improvinghapticresponseforcontextualhumanrobotinteraction
AT zoppimatteo improvinghapticresponseforcontextualhumanrobotinteraction
AT molfinorezia improvinghapticresponseforcontextualhumanrobotinteraction
AT chablatdamien improvinghapticresponseforcontextualhumanrobotinteraction