A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions
Main Authors: | Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181635/ https://www.ncbi.nlm.nih.gov/pubmed/37177574 http://dx.doi.org/10.3390/s23094373 |
_version_ | 1785041621479325696 |
---|---|
author | Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung |
author_facet | Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung |
author_sort | Razzaq, Muhammad Asif |
collection | PubMed |
description | Multimodal emotion recognition has gained much traction in the fields of affective computing, human–computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is a growing demand to automate the analysis of user emotions in HCI, AI, and UX evaluation applications that provide affective services. Emotion data are increasingly obtained from video, audio, text, or physiological signals. This has led to processing emotions from multiple modalities, usually combined through ensemble-based systems with static weights. Owing to limitations such as missing modality data, inter-class variation, and intra-class similarity, an effective weighting scheme is required to improve discrimination between modalities. This article accounts for the differing importance of individual modalities and assigns them dynamic weights by adopting a more efficient combination process based on generalized mixture (GM) functions. We therefore present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces both feature-level and decision-level multimodal fusion using GM functions. In an experimental study, we evaluated the ability of the proposed framework to model a set of four emotional states (Happiness, Neutral, Sadness, and Anger) and found that most of them can be modeled with high accuracy using GM functions. The experiments show that the proposed framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. The overall evaluation results indicate that the framework identifies emotional states with high accuracy and increases the robustness of the emotion classification system required for UX measurement. |
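The description above centers on decision-level fusion with generalized mixture (GM) functions, where each modality's weight is computed from the inputs themselves rather than fixed in advance. The sketch below illustrates one simple member of the GM family in Python; the `gm_fuse` function, the confidence-based weighting rule, and the example scores are illustrative assumptions for this record, not the paper's exact formulation.

```python
import numpy as np

def gm_fuse(scores: np.ndarray) -> np.ndarray:
    """Fuse per-modality class scores with a simple generalized
    mixture (GM) function: each modality's weight is derived from
    the scores themselves, so weights adapt per sample instead of
    being fixed ahead of time (a sketch, not the paper's scheme).

    scores: (n_modalities, n_classes) array of class probabilities.
    Returns a fused (n_classes,) probability vector.
    """
    # Dynamic weight per modality: here, its peak confidence.
    # (One simple choice of GM member; the paper may use another.)
    conf = scores.max(axis=1)          # (n_modalities,)
    weights = conf / conf.sum()        # normalize weights to sum to 1
    fused = weights @ scores           # input-dependent weighted mixture
    return fused / fused.sum()         # renormalize to a distribution

# Hypothetical example: audio, video, and text classifiers scoring
# the four emotional states evaluated in the paper.
classes = ["Happiness", "Neutral", "Sadness", "Anger"]
scores = np.array([
    [0.70, 0.10, 0.10, 0.10],   # audio: confident -> higher weight
    [0.30, 0.30, 0.20, 0.20],   # video: uncertain -> lower weight
    [0.55, 0.15, 0.15, 0.15],   # text
])
fused = gm_fuse(scores)
print(classes[int(np.argmax(fused))])  # predicted emotional state
```

Because the weights are recomputed per sample, a missing modality can simply be dropped from `scores` before fusing, which is one way such a scheme can address the missing-modality limitation mentioned in the abstract.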
format | Online Article Text |
id | pubmed-10181635 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10181635 2023-05-13 A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung Sensors (Basel) Article Multimodal emotion recognition has gained much traction in the fields of affective computing, human–computer interaction (HCI), artificial intelligence (AI), and user experience (UX). There is a growing demand to automate the analysis of user emotions in HCI, AI, and UX evaluation applications that provide affective services. Emotion data are increasingly obtained from video, audio, text, or physiological signals. This has led to processing emotions from multiple modalities, usually combined through ensemble-based systems with static weights. Owing to limitations such as missing modality data, inter-class variation, and intra-class similarity, an effective weighting scheme is required to improve discrimination between modalities. This article accounts for the differing importance of individual modalities and assigns them dynamic weights by adopting a more efficient combination process based on generalized mixture (GM) functions. We therefore present a hybrid multimodal emotion recognition (H-MMER) framework that uses a multi-view learning approach for unimodal emotion recognition and introduces both feature-level and decision-level multimodal fusion using GM functions. In an experimental study, we evaluated the ability of the proposed framework to model a set of four emotional states (Happiness, Neutral, Sadness, and Anger) and found that most of them can be modeled with high accuracy using GM functions. The experiments show that the proposed framework models emotional states with an average accuracy of 98.19%, a significant performance gain over traditional approaches. The overall evaluation results indicate that the framework identifies emotional states with high accuracy and increases the robustness of the emotion classification system required for UX measurement. MDPI 2023-04-28 /pmc/articles/PMC10181635/ /pubmed/37177574 http://dx.doi.org/10.3390/s23094373 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Razzaq, Muhammad Asif; Hussain, Jamil; Bang, Jaehun; Hua, Cam-Hao; Satti, Fahad Ahmed; Rehman, Ubaid Ur; Bilal, Hafiz Syed Muhammad; Kim, Seong Tae; Lee, Sungyoung A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title_full | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title_fullStr | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title_full_unstemmed | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title_short | A Hybrid Multimodal Emotion Recognition Framework for UX Evaluation Using Generalized Mixture Functions |
title_sort | hybrid multimodal emotion recognition framework for ux evaluation using generalized mixture functions |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181635/ https://www.ncbi.nlm.nih.gov/pubmed/37177574 http://dx.doi.org/10.3390/s23094373 |
work_keys_str_mv | AT razzaqmuhammadasif ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT hussainjamil ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT bangjaehun ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT huacamhao ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT sattifahadahmed ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT rehmanubaidur ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT bilalhafizsyedmuhammad ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT kimseongtae ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT leesungyoung ahybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT razzaqmuhammadasif hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT hussainjamil hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT bangjaehun hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT huacamhao hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT sattifahadahmed hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT rehmanubaidur hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT bilalhafizsyedmuhammad hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT kimseongtae hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions AT leesungyoung hybridmultimodalemotionrecognitionframeworkforuxevaluationusinggeneralizedmixturefunctions |