Probabilistic Dual-Space Fusion for Real-Time Human-Robot Interaction

For robots in human environments, learning complex and demanding interaction skills from humans and responding quickly to human motions are highly desirable. A common challenge in interaction tasks is that the robot must satisfy constraints on its motion trajectories in both task space and joint space in real time. Few studies have addressed this issue of hyperspace constraints in human-robot interaction, although it has been investigated in robot imitation learning. In this work, we propose a dual-space feature fusion method to enhance the accuracy of the inferred trajectories in both task space and joint space; we then introduce a linear mapping operator (LMO) to map the inferred task-space trajectory to a joint-space trajectory. Finally, we combine dual-space fusion, the LMO, and phase estimation into a unified probabilistic framework. We evaluate the dual-space feature fusion capability and real-time performance of our method on two tasks: a robot following a human-handheld object and a ball-hitting experiment. Our inference accuracy in both task space and joint space exceeds that of standard Interaction Primitives (IP), which use only single-space inference, by more than 33%; the inference accuracy of the second-order LMO is comparable to that of the kinematics-based mapping method, and the computation time of our unified inference framework is 54.87% lower than that of the comparison method.
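The abstract describes the method only at a high level, but two of its ingredients have standard textbook forms. The sketch below illustrates them in Python: fusing two Gaussian estimates by precision weighting (one plausible reading of probabilistic dual-space fusion) and fitting a linear operator from task space to joint space by least squares (a first-order stand-in for the paper's second-order LMO). All names, shapes, and data here are hypothetical; the record does not specify the authors' actual implementation.

```python
# A minimal sketch, assuming numpy only. Everything below is illustrative:
# precision-weighted fusion of two Gaussian estimates (one per space) and a
# least-squares *first-order* linear map from task space to joint space,
# standing in for the paper's dual-space fusion and second-order LMO.
import numpy as np


def fuse_gaussians(mu_a, cov_a, mu_b, cov_b):
    """Fuse two Gaussian estimates of the same quantity by precision weighting."""
    prec_a = np.linalg.inv(cov_a)
    prec_b = np.linalg.inv(cov_b)
    cov = np.linalg.inv(prec_a + prec_b)        # fused covariance
    mu = cov @ (prec_a @ mu_a + prec_b @ mu_b)  # fused mean
    return mu, cov


def fit_linear_map(X_task, Q_joint):
    """Fit a linear operator (with bias) mapping task-space points to joint angles."""
    X_aug = np.hstack([X_task, np.ones((X_task.shape[0], 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X_aug, Q_joint, rcond=None)         # solve Q ~ [X 1] W
    return W


rng = np.random.default_rng(0)

# Two hypothetical 3-D task-space estimates (e.g., from two feature models).
mu_a, cov_a = rng.normal(size=3), 0.04 * np.eye(3)
mu_b, cov_b = rng.normal(size=3), 0.09 * np.eye(3)
mu, cov = fuse_gaussians(mu_a, cov_a, mu_b, cov_b)

# Synthetic demonstrations: 50 task-space points paired with 4 joint angles.
X = rng.normal(size=(50, 3))
Q = X @ rng.normal(size=(3, 4)) + 0.1
W = fit_linear_map(X, Q)

q_pred = np.append(mu, 1.0) @ W  # joint configuration for the fused estimate
```

Precision weighting has the useful property that the more certain estimate dominates the fused mean, which is presumably why probabilistic fusion helps when one space is better constrained than the other at a given moment.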

Bibliographic Details
Main Authors: Li, Yihui; Wu, Jiajun; Chen, Xiaohan; Guan, Yisheng
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10604510/
https://www.ncbi.nlm.nih.gov/pubmed/37887628
http://dx.doi.org/10.3390/biomimetics8060497
collection PubMed
id pubmed-10604510
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Biomimetics (Basel)
publishDate 2023-10-19
license © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).