
Enhancing Human–Robot Collaboration through a Multi-Module Interaction Framework with Sensor Fusion: Object Recognition, Verbal Communication, User of Interest Detection, Gesture and Gaze Recognition

With the increasing presence of robots in our daily lives, it is crucial to design interaction interfaces that are natural, easy to use, and meaningful for robotic tasks. This is important not only to enhance the user experience but also to increase task reliability by providing supplementary information. Motivated by this, we propose a multi-modal framework consisting of multiple independent modules. These modules take advantage of multiple sensors (e.g., image, sound, depth) and can be used separately or in combination for effective human–robot collaborative interaction. We identified and implemented four key components of an effective human–robot collaborative setting: determining object location and pose, extracting detailed information from verbal instructions, resolving the user(s) of interest (UOI), and recognizing gestures and estimating gaze to facilitate natural and intuitive interaction. The system uses a feature-detector-descriptor approach for object recognition, a homography-based technique for planar pose estimation, and a deep multi-task learning model to extract detailed task parameters from verbal communication. The user of interest (UOI) is detected by estimating users' facing states and identifying active speakers. The framework also includes gesture detection and gaze estimation modules, which are combined with the verbal instruction component to form structured commands for robotic entities. Experiments were conducted to assess the performance of these interaction interfaces, and the results demonstrate the effectiveness of the approach.
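The abstract names a feature-detector-descriptor pipeline with homography-based planar pose estimation but gives no code. Below is a minimal sketch of that general technique using OpenCV; the choice of ORB, the ratio-test threshold, and the RANSAC reprojection tolerance are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a feature-detector-descriptor + homography pipeline (assumptions:
# ORB features, 0.75 ratio test, 5 px RANSAC threshold; the paper may differ).
import cv2
import numpy as np

def locate_planar_object(template_gray, scene_gray, min_matches=10):
    """Find a known planar object in a scene image; return the homography or None."""
    orb = cv2.ORB_create(nfeatures=1000)            # detector + binary descriptor
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return None

    # Hamming distance suits ORB's binary descriptors; ratio test prunes ambiguity.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_t, des_s, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < min_matches:
        return None

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

Given the camera intrinsics K, cv2.decomposeHomographyMat(H, K) then recovers candidate rotations and translations, which is the standard route from a planar homography to a pose estimate.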
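The paper's deep multi-task learning model for verbal instructions is not specified in this record; the sketch below shows only the general pattern such a model follows, a shared encoder feeding one head per task (here, sentence-level intent classification plus token-level slot tagging). The architecture, layer sizes, and task names are all assumptions.

```python
# Hedged sketch of a multi-task instruction parser: shared BiLSTM encoder,
# per-task heads. Hyperparameters and tasks are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskInstructionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 num_intents=8, num_slot_tags=12):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared encoder: both task heads read its output.
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)  # sentence level
        self.slot_head = nn.Linear(2 * hidden_dim, num_slot_tags)  # token level

    def forward(self, token_ids):
        x = self.embed(token_ids)                   # (batch, seq, embed)
        enc, _ = self.encoder(x)                    # (batch, seq, 2*hidden)
        intent_logits = self.intent_head(enc.mean(dim=1))  # pooled sentence repr.
        slot_logits = self.slot_head(enc)           # per-token tag logits
        return intent_logits, slot_logits

# Training typically sums per-task losses so the shared encoder serves both:
# loss = ce(intent_logits, intent_y) + ce(slot_logits.transpose(1, 2), slot_y)
```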
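UOI resolution is described as combining facing-state estimation with active-speaker detection. The sketch below illustrates one plausible facing-state test, head yaw recovered from 2D facial landmarks via solvePnP; the generic 3D face model, landmark set, Euler convention, and yaw threshold are assumptions, and active-speaker detection is not shown.

```python
# Sketch of a facing-state check: small head yaw ~= user facing the sensor.
# The 3D face model points (in mm) and the 25-degree threshold are assumptions.
import cv2
import numpy as np

MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def is_facing_camera(image_points, camera_matrix, yaw_limit_deg=25.0):
    """image_points: (6, 2) landmark pixel positions matching MODEL_POINTS."""
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, None)
    if not ok:
        return False
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
    yaw = np.degrees(np.arcsin(-R[2, 0]))      # assumed Z-Y-X Euler convention
    return abs(yaw) < yaw_limit_deg
```

In a full UOI module, a face passing this test would still need to coincide with the active speaker (e.g., from audio localization or lip activity) before being selected.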

Bibliographic Details
Main Authors: Paul, Shuvo Kumar; Nicolescu, Mircea; Nicolescu, Monica
Format: Online Article Text
Language: English
Published: MDPI, 21 June 2023
Journal: Sensors (Basel)
Collection: PubMed (National Center for Biotechnology Information)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10347030/
https://www.ncbi.nlm.nih.gov/pubmed/37447647
http://dx.doi.org/10.3390/s23135798
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).