Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction
This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen based on a previous Wizard of Oz study, which revealed a strong propensity for speech and pointing gestures. Based on these findings, we design and implement a new multi-modal system for contactless human-machine interaction built on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive study with multiple subjects to examine user experience and interaction efficiency. The results show that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot interaction in our Wizard of Oz study. Furthermore, the framework is implemented on the Robot Operating System (ROS), providing modularity and extensibility for our multi-device and multi-user approach.
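The abstract notes that the framework runs on ROS so that modalities, devices, and users can be added modularly. Purely as an illustration of that kind of architecture (not the authors' actual implementation, which is not part of this record), the following minimal rospy sketch fuses events from hypothetical speech- and gesture-recognition topics into a single command topic; all topic names, message types, and the fusion rule are assumptions.

```python
# Illustrative sketch only: a minimal ROS 1 (rospy) node that fuses events from
# assumed speech- and gesture-recognition topics into one command topic.
# Topic names, message types, and the fusion logic are hypothetical and are not
# taken from the RoSA paper or its source code.
import rospy
from std_msgs.msg import String


class SimpleFusionNode:
    def __init__(self):
        # Publisher for the fused, high-level command stream.
        self.cmd_pub = rospy.Publisher("/rosa/command", String, queue_size=10)
        # Subscribers for the individual (assumed) modality topics.
        rospy.Subscriber("/rosa/speech_text", String, self.on_speech)
        rospy.Subscriber("/rosa/pointing_target", String, self.on_gesture)
        self.last_target = None  # most recently pointed-at object, if any

    def on_gesture(self, msg):
        # Remember the latest pointing target so a later utterance can refer to it.
        self.last_target = msg.data

    def on_speech(self, msg):
        # Naive fusion rule: attach the last pointing target to deictic commands.
        text = msg.data.lower()
        if "that" in text and self.last_target:
            self.cmd_pub.publish(String(data=f"{text} -> {self.last_target}"))
        else:
            self.cmd_pub.publish(String(data=text))


if __name__ == "__main__":
    rospy.init_node("rosa_fusion_sketch")
    SimpleFusionNode()
    rospy.spin()  # keep the node alive and process callbacks
```

Because each modality lives behind its own topic, additional recognizers or devices could publish to the same interfaces without changing the fusion node, which is the kind of extensibility the abstract attributes to the ROS-based design.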
Main Authors: | Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8838571/ https://www.ncbi.nlm.nih.gov/pubmed/35161671 http://dx.doi.org/10.3390/s22030923 |
_version_ | 1784650160012263424 |
---|---|
author | Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub |
author_facet | Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub |
author_sort | Strazdas, Dominykas |
collection | PubMed |
description | This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen based on a previous Wizard of Oz study, which revealed a strong propensity for speech and pointing gestures. Based on these findings, we design and implement a new multi-modal system for contactless human-machine interaction built on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive study with multiple subjects to examine user experience and interaction efficiency. The results show that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot interaction in our Wizard of Oz study. Furthermore, the framework is implemented on the Robot Operating System (ROS), providing modularity and extensibility for our multi-device and multi-user approach. |
format | Online Article Text |
id | pubmed-8838571 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8838571 2022-02-13 Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub Sensors (Basel) Article This paper presents an implementation of RoSA, a Robot System Assistant, for safe and intuitive human-machine interaction. The interaction modalities were chosen based on a previous Wizard of Oz study, which revealed a strong propensity for speech and pointing gestures. Based on these findings, we design and implement a new multi-modal system for contactless human-machine interaction built on speech, facial, and gesture recognition. We evaluate the proposed system in an extensive study with multiple subjects to examine user experience and interaction efficiency. The results show that our method achieves usability scores similar to those of the fully human-operated, remote-controlled robot interaction in our Wizard of Oz study. Furthermore, the framework is implemented on the Robot Operating System (ROS), providing modularity and extensibility for our multi-device and multi-user approach. MDPI 2022-01-25 /pmc/articles/PMC8838571/ /pubmed/35161671 http://dx.doi.org/10.3390/s22030923 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Strazdas, Dominykas; Hintz, Jan; Khalifa, Aly; Abdelrahman, Ahmed A.; Hempel, Thorsten; Al-Hamadi, Ayoub Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title | Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title_full | Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title_fullStr | Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title_full_unstemmed | Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title_short | Robot System Assistant (RoSA): Towards Intuitive Multi-Modal and Multi-Device Human-Robot Interaction |
title_sort | robot system assistant (rosa): towards intuitive multi-modal and multi-device human-robot interaction |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8838571/ https://www.ncbi.nlm.nih.gov/pubmed/35161671 http://dx.doi.org/10.3390/s22030923 |
work_keys_str_mv | AT strazdasdominykas robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction AT hintzjan robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction AT khalifaaly robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction AT abdelrahmanahmeda robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction AT hempelthorsten robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction AT alhamadiayoub robotsystemassistantrosatowardsintuitivemultimodalandmultidevicehumanrobotinteraction |