
A Bimodal Emotion Recognition Approach through the Fusion of Electroencephalography and Facial Sequences



Bibliographic Details
Main Authors: Muhammad, Farah; Hussain, Muhammad; Aboalsamh, Hatim
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10000366/
https://www.ncbi.nlm.nih.gov/pubmed/36900121
http://dx.doi.org/10.3390/diagnostics13050977
Collection: PubMed
Description: In recent years, human–computer interaction (HCI) systems have become increasingly popular. Some of these systems demand particular approaches for discriminating actual emotions through the use of better multimodal methods. In this work, a deep canonical correlation analysis (DCCA)-based multimodal emotion recognition method is presented through the fusion of electroencephalography (EEG) and facial video clips. A two-stage framework is implemented, where the first stage extracts relevant features for emotion recognition using a single modality, while the second stage merges the highly correlated features from the two modalities and performs classification. A convolutional neural network (CNN)-based ResNet50 and a 1D-CNN (1-dimensional CNN) were utilized to extract features from the facial video clips and EEG modalities, respectively. A DCCA-based approach was used to fuse highly correlated features, and three basic human emotion categories (happy, neutral, and sad) were classified using the softmax classifier. The proposed approach was evaluated on the publicly available MAHNOB-HCI and DEAP datasets. Experimental results revealed an average accuracy of 93.86% and 91.54% on the MAHNOB-HCI and DEAP datasets, respectively. The competitiveness of the proposed framework was assessed by comparison with existing work.
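As a rough illustration of the fusion stage described in the abstract, the sketch below computes the canonical-correlation objective that DCCA maximizes between two feature matrices (stand-ins for the ResNet50 facial features and the 1D-CNN EEG features). This is a minimal NumPy sketch, not the authors' implementation; the function name and the regularization constant `r` are illustrative assumptions.

```python
import numpy as np

def cca_correlation(X, Y, k=None, r=1e-4):
    """Sum of the top-k canonical correlations between feature matrices
    X (n x d1) and Y (n x d2) -- the quantity a DCCA network maximizes.
    r is a small ridge term that keeps the covariance estimates invertible."""
    n = X.shape[0]
    X = X - X.mean(axis=0)   # center each modality's features
    Y = Y - Y.mean(axis=0)
    # Regularized covariance and cross-covariance estimates
    Sxx = X.T @ X / (n - 1) + r * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + r * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(M):
        # Inverse matrix square root via eigendecomposition (M is symmetric PD)
        w, V = np.linalg.eigh(M)
        return V @ np.diag(w ** -0.5) @ V.T

    # Singular values of the whitened cross-covariance are the canonical correlations
    T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    s = np.linalg.svd(T, compute_uv=False)
    k = k if k is not None else min(X.shape[1], Y.shape[1])
    return float(np.sum(s[:k]))
```

In a DCCA setup, the two modality networks would be trained by backpropagating the negative of this objective (computed in a differentiable framework), so that their output features become maximally correlated before fusion and classification.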
ID: pubmed-10000366
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: Diagnostics (Basel)
Published online: 2023-03-04
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).