Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features
Main Authors: | Mamieva, Dilnoza; Abdusalomov, Akmalbek Bobomirzaevich; Kutlimuratov, Alpamis; Muminov, Bahodir; Whangbo, Taeg Keun |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10304130/ https://www.ncbi.nlm.nih.gov/pubmed/37420642 http://dx.doi.org/10.3390/s23125475 |
_version_ | 1785065434942275584 |
---|---|
author | Mamieva, Dilnoza; Abdusalomov, Akmalbek Bobomirzaevich; Kutlimuratov, Alpamis; Muminov, Bahodir; Whangbo, Taeg Keun
author_facet | Mamieva, Dilnoza; Abdusalomov, Akmalbek Bobomirzaevich; Kutlimuratov, Alpamis; Muminov, Bahodir; Whangbo, Taeg Keun
author_sort | Mamieva, Dilnoza |
collection | PubMed |
description | Emotion detection methods that use several modalities at once have been shown to be more accurate and robust than those that rely on a single modality. This is because emotions are conveyed through many channels, each offering a different and complementary window into the speaker’s thoughts and feelings, so fusing and analyzing data from several modalities yields a more complete picture of a person’s emotional state. The research proposes a new attention-based approach to multimodal emotion recognition that integrates facial and speech features extracted by independent encoders and selects the most informative aspects of each. The system improves accuracy by processing speech and facial features at different scales and attending to the most useful parts of the input. Both low- and high-level facial features are used to obtain a more comprehensive representation of facial expressions. The modalities are combined by a fusion network into a multimodal feature vector, which is then fed to a classification layer for emotion recognition. The developed system is evaluated on two datasets, IEMOCAP and CMU-MOSEI, and outperforms existing models, achieving a weighted accuracy (WA) of 74.6% and an F1 score of 66.1% on IEMOCAP, and a WA of 80.7% and an F1 score of 73.7% on CMU-MOSEI. |
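The description outlines the pipeline at a high level: separate encoders produce facial and speech features, an attention mechanism weights the most informative parts, and a fusion network feeds a classification layer. The following is a minimal PyTorch sketch of that kind of attention-based fusion head; the module name, feature dimensions, number of classes, and the simple per-modality softmax attention are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code) of attention-based fusion of
# pre-extracted facial and speech feature vectors, followed by a classification layer.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Projects facial and speech features to a shared size, weights each modality
    with a learned attention score, and classifies the fused vector."""

    def __init__(self, face_dim=512, speech_dim=128, hidden_dim=256, num_classes=4):
        super().__init__()
        # Independent projections stand in for the outputs of separate facial/speech encoders.
        self.face_proj = nn.Linear(face_dim, hidden_dim)
        self.speech_proj = nn.Linear(speech_dim, hidden_dim)
        # One scalar attention score per modality, normalized with softmax.
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, face_feat, speech_feat):
        # face_feat: (batch, face_dim), speech_feat: (batch, speech_dim)
        modalities = torch.stack(
            [torch.tanh(self.face_proj(face_feat)),
             torch.tanh(self.speech_proj(speech_feat))], dim=1)   # (batch, 2, hidden)
        scores = torch.softmax(self.attn(modalities), dim=1)      # (batch, 2, 1)
        fused = (scores * modalities).sum(dim=1)                  # (batch, hidden)
        return self.classifier(fused)                             # (batch, num_classes)


# Example usage with random tensors standing in for real encoder outputs.
model = AttentionFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 128))
print(logits.shape)  # torch.Size([8, 4])
```

The attention weights let the classifier lean on whichever modality is more informative for a given sample, which is the role the abstract attributes to the attention-based fusion stage.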
format | Online Article Text |
id | pubmed-10304130 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10304130 2023-06-29 Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features Mamieva, Dilnoza Abdusalomov, Akmalbek Bobomirzaevich Kutlimuratov, Alpamis Muminov, Bahodir Whangbo, Taeg Keun Sensors (Basel) Article Emotion detection methods that use several modalities at once have been shown to be more accurate and robust than those that rely on a single modality. This is because emotions are conveyed through many channels, each offering a different and complementary window into the speaker’s thoughts and feelings, so fusing and analyzing data from several modalities yields a more complete picture of a person’s emotional state. The research proposes a new attention-based approach to multimodal emotion recognition that integrates facial and speech features extracted by independent encoders and selects the most informative aspects of each. The system improves accuracy by processing speech and facial features at different scales and attending to the most useful parts of the input. Both low- and high-level facial features are used to obtain a more comprehensive representation of facial expressions. The modalities are combined by a fusion network into a multimodal feature vector, which is then fed to a classification layer for emotion recognition. The developed system is evaluated on two datasets, IEMOCAP and CMU-MOSEI, and outperforms existing models, achieving a weighted accuracy (WA) of 74.6% and an F1 score of 66.1% on IEMOCAP, and a WA of 80.7% and an F1 score of 73.7% on CMU-MOSEI. MDPI 2023-06-09 /pmc/articles/PMC10304130/ /pubmed/37420642 http://dx.doi.org/10.3390/s23125475 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Mamieva, Dilnoza Abdusalomov, Akmalbek Bobomirzaevich Kutlimuratov, Alpamis Muminov, Bahodir Whangbo, Taeg Keun Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title | Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title_full | Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title_fullStr | Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title_full_unstemmed | Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title_short | Multimodal Emotion Detection via Attention-Based Fusion of Extracted Facial and Speech Features |
title_sort | multimodal emotion detection via attention-based fusion of extracted facial and speech features |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10304130/ https://www.ncbi.nlm.nih.gov/pubmed/37420642 http://dx.doi.org/10.3390/s23125475 |
work_keys_str_mv | AT mamievadilnoza multimodalemotiondetectionviaattentionbasedfusionofextractedfacialandspeechfeatures AT abdusalomovakmalbekbobomirzaevich multimodalemotiondetectionviaattentionbasedfusionofextractedfacialandspeechfeatures AT kutlimuratovalpamis multimodalemotiondetectionviaattentionbasedfusionofextractedfacialandspeechfeatures AT muminovbahodir multimodalemotiondetectionviaattentionbasedfusionofextractedfacialandspeechfeatures AT whangbotaegkeun multimodalemotiondetectionviaattentionbasedfusionofextractedfacialandspeechfeatures |