
Recent advancements in multimodal human–robot interaction

Robotics has advanced significantly over the years, and human–robot interaction (HRI) now plays an important role in delivering the best user experience, reducing laborious tasks, and raising public acceptance of robots. New HRI approaches are necessary to promote the evolution of robots...

Full description

Bibliographic Details
Main Authors: Su, Hang, Qi, Wen, Chen, Jiahao, Yang, Chenguang, Sandoval, Juan, Laribi, Med Amine
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10210148/
https://www.ncbi.nlm.nih.gov/pubmed/37250671
http://dx.doi.org/10.3389/fnbot.2023.1084000
collection PubMed
description Robotics has advanced significantly over the years, and human–robot interaction (HRI) now plays an important role in delivering the best user experience, reducing laborious tasks, and raising public acceptance of robots. New HRI approaches are necessary to promote the evolution of robots, and a more natural and flexible interaction manner is clearly the most crucial. As a newly emerging approach to HRI, multimodal HRI is a method for individuals to communicate with a robot using various modalities, including voice, image, text, eye movement, and touch, as well as bio-signals such as EEG and ECG. It is a broad field closely related to cognitive science, ergonomics, multimedia technology, and virtual reality, with numerous applications emerging each year. However, little research has summarized the current development and future trends of HRI. To this end, this paper systematically reviews the state of the art of multimodal HRI and its applications by summarizing the latest research articles relevant to this field. The research development in terms of input and output signals is also covered.
format Online
Article
Text
id pubmed-10210148
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10210148 2023-05-26 Recent advancements in multimodal human–robot interaction Su, Hang; Qi, Wen; Chen, Jiahao; Yang, Chenguang; Sandoval, Juan; Laribi, Med Amine Front Neurorobot Neuroscience Frontiers Media S.A. 2023-05-11 /pmc/articles/PMC10210148/ /pubmed/37250671 http://dx.doi.org/10.3389/fnbot.2023.1084000 Text en Copyright © 2023 Su, Qi, Chen, Yang, Sandoval and Laribi. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Recent advancements in multimodal human–robot interaction
topic Neuroscience