
Expression-EEG Bimodal Fusion Emotion Recognition Method Based on Deep Learning


Bibliographic Details
Main Authors: Lu, Yu, Zhang, Hua, Shi, Lei, Yang, Fei, Li, Jing
Format: Online Article Text
Language: English
Published: Hindawi 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8172283/
https://www.ncbi.nlm.nih.gov/pubmed/34122621
http://dx.doi.org/10.1155/2021/9940148
author Lu, Yu
Zhang, Hua
Shi, Lei
Yang, Fei
Li, Jing
author_sort Lu, Yu
collection PubMed
description As one of the key problems in affective computing, emotion recognition has rich application scenarios and important research value. However, single-modality biometric recognition in real-world scenes suffers from low emotion-classification accuracy due to its inherent limitations. To address this problem, this paper proposes a deep-learning-based expression-EEG bimodal fusion emotion recognition method. The method uses an improved VGG-FACE network model to rapidly extract facial expression features and shorten the network's training time. A wavelet soft-threshold algorithm removes artifacts from the EEG signals so that high-quality EEG features can be extracted. Then, based on long short-term memory (LSTM) network models and a decision-fusion method, the model is built and trained on the feature data extracted from the expression and EEG modalities to perform the final bimodal fusion emotion classification. Finally, the proposed method is validated on the MAHNOB-HCI data set. Experimental results show that the proposed model achieves a recognition accuracy of 0.89, an improvement of 8.51% over the traditional LSTM model, and shortens running time by about 20 s compared with the traditional method.
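The wavelet soft-threshold artifact-removal step mentioned in the abstract can be sketched as below. This is an illustrative NumPy implementation only, assuming a multi-level Haar wavelet decomposition and Donoho's universal threshold; the paper's actual wavelet basis, decomposition depth, and threshold rule are not specified in this record and may differ.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding operator: sign(x) * max(|x| - t, 0)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_dwt(x):
    # One level of an orthonormal Haar transform: approximation and detail parts
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(signal, num_levels=3):
    # Decompose, soft-threshold the detail coefficients, reconstruct.
    # Signal length must be divisible by 2 ** num_levels.
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(num_levels):
        approx, d = haar_dwt(approx)
        details.append(d)  # finest level first
    # Universal threshold: sigma * sqrt(2 ln N), sigma estimated from the
    # median absolute value of the finest detail coefficients
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(signal)))
    details = [soft_threshold(d, t) for d in details]
    for d in reversed(details):  # coarsest level first on the way back up
        approx = haar_idwt(approx, d)
    return approx

# Toy usage: a slow oscillation (stand-in for an EEG trace) plus noise
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 2 * np.arange(512) / 512)
noisy = clean + 0.3 * rng.standard_normal(512)
denoised = denoise(noisy)
```

On this toy signal the denoised output is closer to the clean trace than the noisy input is; real EEG preprocessing would typically use a smoother wavelet family and per-level thresholds.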
format Online Article Text
id pubmed-8172283
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-8172283 2021-06-11 Expression-EEG Bimodal Fusion Emotion Recognition Method Based on Deep Learning. Lu, Yu; Zhang, Hua; Shi, Lei; Yang, Fei; Li, Jing. Comput Math Methods Med, Research Article. Hindawi 2021-05-25. /pmc/articles/PMC8172283/ /pubmed/34122621 http://dx.doi.org/10.1155/2021/9940148 Text en Copyright © 2021 Yu Lu et al.
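The decision-fusion step the abstract describes (combining the expression branch and the EEG branch at the classifier-output level) can be illustrated as a weighted average of per-modality class probabilities. The fusion weight and the three-class setup below are hypothetical, chosen only for illustration; the record does not specify the authors' actual weighting scheme.

```python
import numpy as np

def decision_fusion(p_face, p_eeg, w_face=0.5):
    # Decision-level fusion: weighted average of the class-probability
    # vectors produced by the two modality classifiers.
    # w_face is a hypothetical fusion weight in [0, 1].
    fused = w_face * np.asarray(p_face, dtype=float) \
        + (1 - w_face) * np.asarray(p_eeg, dtype=float)
    return fused / fused.sum()  # renormalize to a probability vector

# Example with three emotion classes and two branch outputs
p_face = [0.7, 0.2, 0.1]  # from the expression branch (e.g. VGG-FACE features + LSTM)
p_eeg = [0.5, 0.4, 0.1]   # from the EEG branch
fused = decision_fusion(p_face, p_eeg, w_face=0.6)
print(int(np.argmax(fused)))  # prints 0: both branches favor class 0
```

Fusing at the decision level keeps the two branches independent, so each modality's network can be trained and tuned separately before combination.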
https://creativecommons.org/licenses/by/4.0/
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
title Expression-EEG Bimodal Fusion Emotion Recognition Method Based on Deep Learning
topic Research Article