Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks
One of the greatest limitations in the field of EEG-based emotion recognition is the lack of training samples, which makes it difficult to establish effective models for emotion recognition. Inspired by the excellent achievements of generative models in image processing, we propose a data augmentation model named VAE-D2GAN for EEG-based emotion recognition using a generative adversarial network. EEG features representing different emotions are extracted as topological maps of differential entropy (DE) under five classical frequency bands. The proposed model is designed to learn the distributions of these features for real EEG signals and to generate artificial samples for training. The variational auto-encoder (VAE) architecture can learn the spatial distribution of the actual data through a latent vector, and is introduced into the dual-discriminator GAN to improve the diversity of the generated artificial samples. To evaluate the performance of this model, we conduct a systematic test on two public emotion EEG datasets, SEED and SEED-IV. The method with data augmentation achieves recognition accuracies of 92.5% and 82.3% on the SEED and SEED-IV datasets, respectively, which are 1.5 and 3.5% higher than those of the method without data augmentation. The experimental results show that the artificial samples generated by our model can effectively enhance the performance of EEG-based emotion recognition.
Main Authors: Bao, Guangcheng; Yan, Bin; Tong, Li; Shu, Jun; Wang, Linyuan; Yang, Kai; Zeng, Ying
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8700963/ https://www.ncbi.nlm.nih.gov/pubmed/34955797 http://dx.doi.org/10.3389/fncom.2021.723843
_version_ | 1784620884629127168 |
author | Bao, Guangcheng Yan, Bin Tong, Li Shu, Jun Wang, Linyuan Yang, Kai Zeng, Ying |
author_facet | Bao, Guangcheng Yan, Bin Tong, Li Shu, Jun Wang, Linyuan Yang, Kai Zeng, Ying |
author_sort | Bao, Guangcheng |
collection | PubMed |
description | One of the greatest limitations in the field of EEG-based emotion recognition is the lack of training samples, which makes it difficult to establish effective models for emotion recognition. Inspired by the excellent achievements of generative models in image processing, we propose a data augmentation model named VAE-D2GAN for EEG-based emotion recognition using a generative adversarial network. EEG features representing different emotions are extracted as topological maps of differential entropy (DE) under five classical frequency bands. The proposed model is designed to learn the distributions of these features for real EEG signals and to generate artificial samples for training. The variational auto-encoder (VAE) architecture can learn the spatial distribution of the actual data through a latent vector, and is introduced into the dual-discriminator GAN to improve the diversity of the generated artificial samples. To evaluate the performance of this model, we conduct a systematic test on two public emotion EEG datasets, SEED and SEED-IV. The method with data augmentation achieves recognition accuracies of 92.5% and 82.3% on the SEED and SEED-IV datasets, respectively, which are 1.5 and 3.5% higher than those of the method without data augmentation. The experimental results show that the artificial samples generated by our model can effectively enhance the performance of EEG-based emotion recognition. |
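The differential entropy (DE) features mentioned in the abstract can be sketched as follows. This is a hypothetical minimal illustration, not the authors' code: it assumes that band-filtered EEG is approximately Gaussian, under which DE reduces to the closed form ½·log(2πeσ²); the band edges, filter order, channel count, and sampling rate below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Five classical EEG frequency bands (Hz); exact edges vary across papers.
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
    "beta": (14, 31), "gamma": (31, 50),
}

def de_features(eeg, fs=200):
    """Compute per-band differential entropy features.

    eeg: array of shape (channels, samples)
    fs:  sampling rate in Hz
    Returns an array of shape (channels, len(BANDS)).
    """
    feats = []
    nyq = fs / 2
    for lo, hi in BANDS.values():
        # 4th-order Butterworth band-pass, zero-phase filtering.
        b, a = butter(4, [lo / nyq, hi / nyq], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        # Gaussian assumption: DE = 0.5 * log(2 * pi * e * variance).
        var = np.var(filtered, axis=-1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.stack(feats, axis=-1)
```

In the paper's pipeline these per-channel, per-band values are further arranged into topological maps by electrode position; the spatial interpolation step is omitted here.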
format | Online Article Text |
id | pubmed-8700963 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8700963 2021-12-24 Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks Bao, Guangcheng Yan, Bin Tong, Li Shu, Jun Wang, Linyuan Yang, Kai Zeng, Ying Front Comput Neurosci Neuroscience One of the greatest limitations in the field of EEG-based emotion recognition is the lack of training samples, which makes it difficult to establish effective models for emotion recognition. Inspired by the excellent achievements of generative models in image processing, we propose a data augmentation model named VAE-D2GAN for EEG-based emotion recognition using a generative adversarial network. EEG features representing different emotions are extracted as topological maps of differential entropy (DE) under five classical frequency bands. The proposed model is designed to learn the distributions of these features for real EEG signals and to generate artificial samples for training. The variational auto-encoder (VAE) architecture can learn the spatial distribution of the actual data through a latent vector, and is introduced into the dual-discriminator GAN to improve the diversity of the generated artificial samples. To evaluate the performance of this model, we conduct a systematic test on two public emotion EEG datasets, SEED and SEED-IV. The method with data augmentation achieves recognition accuracies of 92.5% and 82.3% on the SEED and SEED-IV datasets, respectively, which are 1.5 and 3.5% higher than those of the method without data augmentation. The experimental results show that the artificial samples generated by our model can effectively enhance the performance of EEG-based emotion recognition. Frontiers Media S.A. 2021-12-09 /pmc/articles/PMC8700963/ /pubmed/34955797 http://dx.doi.org/10.3389/fncom.2021.723843 Text en Copyright © 2021 Bao, Yan, Tong, Shu, Wang, Yang and Zeng. 
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Bao, Guangcheng Yan, Bin Tong, Li Shu, Jun Wang, Linyuan Yang, Kai Zeng, Ying Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title | Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title_full | Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title_fullStr | Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title_full_unstemmed | Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title_short | Data Augmentation for EEG-Based Emotion Recognition Using Generative Adversarial Networks |
title_sort | data augmentation for eeg-based emotion recognition using generative adversarial networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8700963/ https://www.ncbi.nlm.nih.gov/pubmed/34955797 http://dx.doi.org/10.3389/fncom.2021.723843 |
work_keys_str_mv | AT baoguangcheng dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT yanbin dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT tongli dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT shujun dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT wanglinyuan dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT yangkai dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks AT zengying dataaugmentationforeegbasedemotionrecognitionusinggenerativeadversarialnetworks |