
A Multimodal Emotion Detection System during Human-Robot Interaction

In this paper, a multimodal user-emotion detection system for social robots is presented. The system is intended to be used during human–robot interaction and is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice analysis and facial expression analysis. To analyze the user's voice, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), written in the ChucK language. For emotion detection in facial expressions, a second system, Gender and Emotion Facial Analysis (GEFA), has also been developed. It integrates two third-party solutions: the Sophisticated High-speed Object Recognition Engine (SHORE) and the Computer Expression Recognition Toolbox (CERT). Once GEVA and GEFA produce their results, a decision rule is applied to combine the information given by both of them. The result of this rule, the detected emotion, is passed to the dialog system through communicative acts. Hence, each communicative act conveys, among other things, the detected emotion of the user to the RDS, which can then adapt its strategy to achieve a higher degree of user satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually, and both are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results of applying this decision rule in these experiments show a high success rate in automatic user-emotion recognition, improving on the results given by the two information channels (audio and visual) separately.
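
The abstract does not spell out the decision rule that fuses the two channels. Purely as an illustrative sketch, the Python below shows one plausible confidence-based late-fusion policy over per-channel outputs; the ChannelEstimate type, its field names, and the fall-back and tie-breaking behavior are assumptions for illustration, not the authors' actual rule.

```python
# Hypothetical late-fusion decision rule over the two channels named in
# the abstract (GEVA: audio, GEFA: video). All names and the fusion
# policy here are illustrative assumptions, not the paper's method.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChannelEstimate:
    """Emotion estimate reported by one channel."""
    emotion: str       # e.g. "happiness", "sadness", "neutral"
    confidence: float  # channel-reported score in [0, 1]

def fuse(audio: Optional[ChannelEstimate],
         video: Optional[ChannelEstimate]) -> str:
    """Combine both estimates into a single detected emotion.

    Assumed policy: if both channels agree, keep that label; if they
    disagree, trust the more confident channel; if one channel is
    missing (silence, no face detected), fall back to the other.
    """
    if audio is None and video is None:
        return "neutral"              # nothing detected on either channel
    if audio is None:
        return video.emotion
    if video is None:
        return audio.emotion
    if audio.emotion == video.emotion:
        return audio.emotion          # channels agree
    # Disagreement: pick the label from the more confident channel.
    return max(audio, video, key=lambda c: c.confidence).emotion

# Example: the face channel is more confident, so its label wins.
print(fuse(ChannelEstimate("sadness", 0.40),
           ChannelEstimate("happiness", 0.70)))  # -> happiness
```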

Bibliographic Details
Main Authors: Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.
Format: Online Article (Text)
Language: English
Published: Sensors (Basel), Molecular Diversity Preservation International (MDPI), 14 November 2013
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3871074/
https://www.ncbi.nlm.nih.gov/pubmed/24240598
http://dx.doi.org/10.3390/s131115549
License: © 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).