Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition
This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of the valence-arousal emotional space...
Main Authors: | Huang, Yongrui; Yang, Jianhao; Liao, Pengkai; Pan, Jiahui |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi, 2017 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5625811/ https://www.ncbi.nlm.nih.gov/pubmed/29056963 http://dx.doi.org/10.1155/2017/2107451 |
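The decision-level fusion described in the abstract is simple enough to illustrate directly. Below is a minimal sketch, not the authors' implementation: it assumes each modality's classifier emits per-class probabilities over the four emotion states, and combines them with the sum and product rules the abstract names. All variable names and probability values here are hypothetical.

```python
import numpy as np

# Hypothetical per-class probabilities for one trial, ordered as
# (happiness, neutral, sadness, fear). In the paper these would come
# from the facial-expression neural network and the EEG SVM classifiers.
p_face = np.array([0.55, 0.20, 0.15, 0.10])
p_eeg = np.array([0.30, 0.35, 0.20, 0.15])

def fuse_sum(probs_a, probs_b):
    """Sum rule: add the class probabilities, then pick the argmax."""
    return np.argmax(probs_a + probs_b)

def fuse_product(probs_a, probs_b):
    """Product rule: multiply the class probabilities, then pick the argmax."""
    return np.argmax(probs_a * probs_b)

EMOTIONS = ["happiness", "neutral", "sadness", "fear"]
print("sum rule     ->", EMOTIONS[fuse_sum(p_face, p_eeg)])
print("product rule ->", EMOTIONS[fuse_product(p_face, p_eeg)])
```

The two rules behave differently under disagreement: the product rule rewards classes that score reasonably in both modalities, while the sum rule is more forgiving when one classifier is uncertain.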
author | Huang, Yongrui; Yang, Jianhao; Liao, Pengkai; Pan, Jiahui |
collection | PubMed |
description | This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, both higher than the accuracy of facial expression detection (74.38%) or EEG detection (66.88%) alone. Combining facial expression and EEG information for emotion recognition compensates for the weaknesses of each as a single information source. |
format | Online Article Text |
id | pubmed-5625811 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-5625811 2017-10-22 Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition. Huang, Yongrui; Yang, Jianhao; Liao, Pengkai; Pan, Jiahui. Comput Intell Neurosci, Research Article. Hindawi 2017-09-19. /pmc/articles/PMC5625811/ /pubmed/29056963 http://dx.doi.org/10.1155/2017/2107451 Text en Copyright © 2017 Yongrui Huang et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5625811/ https://www.ncbi.nlm.nih.gov/pubmed/29056963 http://dx.doi.org/10.1155/2017/2107451 |