Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition
Electroencephalography (EEG)-based emotion computing has become one of the research hotspots of human-computer interaction (HCI). However, it is difficult for traditional convolutional neural networks to effectively learn the interactions between brain regions in emotional states, because there is information transmission between neurons, which constitutes the brain network structure. The paper proposes MDGCN-SRCNN, a model that links a multi-layer dynamical graph convolutional network with a style-based recalibration CNN to extract both channel-connectivity features and deep-layer abstract features for emotion recognition.
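The record itself contains no code, but the abstract describes a concrete architecture: a multi-layer graph convolutional branch that learns channel connectivity, a CNN branch whose feature maps pass through a style-based recalibration (SRM) block, and a fusion of shallow graph features with deep CNN features. The sketch below is a minimal, assumption-based illustration of that kind of design, not the authors' implementation; the 62-channel/5-band input shape, layer widths, learnable-adjacency form, and class count are guesses chosen only to make the example runnable.

```python
# Hedged sketch (not the authors' released code): a minimal GCN + SRM-recalibrated
# CNN fusion model in PyTorch. Channel count (62), 5 band-wise features per channel
# (a common SEED setup), layer widths, and the learnable-adjacency form are all
# illustrative assumptions.
import torch
import torch.nn as nn


class SRM(nn.Module):
    """Style-based recalibration block: gates CNN feature maps using their
    per-channel mean/std ("style") statistics."""

    def __init__(self, channels):
        super().__init__()
        # channel-wise fully connected: one kernel_size=2 filter per channel
        self.cfc = nn.Conv1d(channels, channels, kernel_size=2, groups=channels)
        self.bn = nn.BatchNorm1d(channels)

    def forward(self, x):                                  # x: (B, C, H, W)
        flat = x.flatten(2)                                # (B, C, H*W)
        style = torch.stack([flat.mean(dim=2), flat.std(dim=2)], dim=2)  # (B, C, 2)
        gate = torch.sigmoid(self.bn(self.cfc(style)))     # (B, C, 1)
        return x * gate.unsqueeze(-1)                      # channel-wise recalibration


class GCNLayer(nn.Module):
    """One graph convolution over EEG channels with a learnable adjacency."""

    def __init__(self, n_nodes, in_feats, out_feats):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(n_nodes) + 0.01 * torch.randn(n_nodes, n_nodes))
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x):                                  # x: (B, N, F)
        a = torch.softmax(torch.relu(self.adj), dim=-1)    # row-normalized adjacency
        return torch.relu(self.lin(a @ x))                 # aggregate, then transform


class MDGCNSRCNNSketch(nn.Module):
    """Shallow graph-connectivity features (GCN branch) fused with deep,
    SRM-recalibrated CNN features, then classified jointly."""

    def __init__(self, n_nodes=62, in_feats=5, n_classes=3):
        super().__init__()
        self.gcn1 = GCNLayer(n_nodes, in_feats, 32)        # multi-layer GCN branch
        self.gcn2 = GCNLayer(n_nodes, 32, 32)
        self.cnn = nn.Sequential(                          # CNN branch with SRM blocks
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), SRM(16),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), SRM(32),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(n_nodes * 32 + 32, n_classes)  # shallow + deep fusion

    def forward(self, x):                                  # x: (B, 62, 5) band features
        g = self.gcn2(self.gcn1(x))                        # (B, 62, 32) shallow features
        d = self.cnn(g.unsqueeze(1)).flatten(1)            # (B, 32) deep features
        return self.fc(torch.cat([g.flatten(1), d], dim=1))


if __name__ == "__main__":
    model = MDGCNSRCNNSketch(n_classes=3)                  # e.g., 3 SEED emotion classes
    print(model(torch.randn(4, 62, 5)).shape)              # torch.Size([4, 3])
```

Concatenating the flattened GCN output with the pooled CNN output mirrors the abstract's point that shallow-layer and deep-layer features are combined before classification.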
Main Authors: | Bao, Guangcheng; Yang, Kai; Tong, Li; Shu, Jun; Zhang, Rongkai; Wang, Linyuan; Yan, Bin; Zeng, Ying |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8907537/ https://www.ncbi.nlm.nih.gov/pubmed/35280845 http://dx.doi.org/10.3389/fnbot.2022.834952 |
_version_ | 1784665667979444224 |
---|---|
author | Bao, Guangcheng Yang, Kai Tong, Li Shu, Jun Zhang, Rongkai Wang, Linyuan Yan, Bin Zeng, Ying |
author_facet | Bao, Guangcheng Yang, Kai Tong, Li Shu, Jun Zhang, Rongkai Wang, Linyuan Yan, Bin Zeng, Ying |
author_sort | Bao, Guangcheng |
collection | PubMed |
description | Electroencephalography (EEG)-based emotion computing has become one of the research hotspots of human-computer interaction (HCI). However, it is difficult for traditional convolutional neural networks to effectively learn the interactions between brain regions in emotional states, because there is information transmission between neurons, which constitutes the brain network structure. In this paper, we propose a novel model combining a graph convolutional network and a convolutional neural network, namely MDGCN-SRCNN, aiming to fully extract features of channel connectivity in different receptive fields as well as deep-layer abstract features to distinguish different emotions. In particular, we add a style-based recalibration module to the CNN to extract deep-layer features, which can better select features that are highly related to emotion. We conducted individual experiments on the SEED and SEED-IV data sets, and the results prove the effectiveness of the MDGCN-SRCNN model: the recognition accuracy on SEED and SEED-IV is 95.08% and 85.52%, respectively. Our model performs better than other state-of-the-art methods. In addition, by visualizing the distribution of features from different layers, we show that combining shallow-layer and deep-layer features effectively improves recognition performance. Finally, we verified the important brain regions and the connection relationships between channels for emotion generation by analyzing the connection weights between channels after model learning. |
format | Online Article Text |
id | pubmed-8907537 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8907537 2022-03-11 Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition Bao, Guangcheng Yang, Kai Tong, Li Shu, Jun Zhang, Rongkai Wang, Linyuan Yan, Bin Zeng, Ying Front Neurorobot Neuroscience Electroencephalography (EEG)-based emotion computing has become one of the research hotspots of human-computer interaction (HCI). However, it is difficult for traditional convolutional neural networks to effectively learn the interactions between brain regions in emotional states, because there is information transmission between neurons, which constitutes the brain network structure. In this paper, we propose a novel model combining a graph convolutional network and a convolutional neural network, namely MDGCN-SRCNN, aiming to fully extract features of channel connectivity in different receptive fields as well as deep-layer abstract features to distinguish different emotions. In particular, we add a style-based recalibration module to the CNN to extract deep-layer features, which can better select features that are highly related to emotion. We conducted individual experiments on the SEED and SEED-IV data sets, and the results prove the effectiveness of the MDGCN-SRCNN model: the recognition accuracy on SEED and SEED-IV is 95.08% and 85.52%, respectively. Our model performs better than other state-of-the-art methods. In addition, by visualizing the distribution of features from different layers, we show that combining shallow-layer and deep-layer features effectively improves recognition performance. Finally, we verified the important brain regions and the connection relationships between channels for emotion generation by analyzing the connection weights between channels after model learning. Frontiers Media S.A. 2022-02-24 /pmc/articles/PMC8907537/ /pubmed/35280845 http://dx.doi.org/10.3389/fnbot.2022.834952 Text en Copyright © 2022 Bao, Yang, Tong, Shu, Zhang, Wang, Yan and Zeng. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Bao, Guangcheng Yang, Kai Tong, Li Shu, Jun Zhang, Rongkai Wang, Linyuan Yan, Bin Zeng, Ying Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title | Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title_full | Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title_fullStr | Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title_full_unstemmed | Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title_short | Linking Multi-Layer Dynamical GCN With Style-Based Recalibration CNN for EEG-Based Emotion Recognition |
title_sort | linking multi-layer dynamical gcn with style-based recalibration cnn for eeg-based emotion recognition |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8907537/ https://www.ncbi.nlm.nih.gov/pubmed/35280845 http://dx.doi.org/10.3389/fnbot.2022.834952 |
work_keys_str_mv | AT baoguangcheng linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT yangkai linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT tongli linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT shujun linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT zhangrongkai linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT wanglinyuan linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT yanbin linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition AT zengying linkingmultilayerdynamicalgcnwithstylebasedrecalibrationcnnforeegbasedemotionrecognition |