Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals
In recent decades, emotion recognition has received considerable attention. As more enthusiasm has shifted to the physiological pattern, a wide range of elaborate physiological emotion data features come up and are combined with various classifying models to detect one’s emotional states. To circumvent the labor of artificially designing features, we propose to acquire affective and robust representations automatically through the Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performances of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affection model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed on deep-learning methods. It turns out that the fusion data perform better than the two modalities. To take advantage of deep-learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method’s practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state-of-the-art of hand-engineered features.
Main Authors: Luo, Junhai; Tian, Yuxin; Yu, Hang; Chen, Yu; Wu, Man
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9141449/ https://www.ncbi.nlm.nih.gov/pubmed/35626462 http://dx.doi.org/10.3390/e24050577
_version_ | 1784715348940947456 |
author | Luo, Junhai Tian, Yuxin Yu, Hang Chen, Yu Wu, Man |
author_facet | Luo, Junhai Tian, Yuxin Yu, Hang Chen, Yu Wu, Man |
author_sort | Luo, Junhai |
collection | PubMed |
description | In recent decades, emotion recognition has received considerable attention. As more enthusiasm has shifted to the physiological pattern, a wide range of elaborate physiological emotion data features come up and are combined with various classifying models to detect one’s emotional states. To circumvent the labor of artificially designing features, we propose to acquire affective and robust representations automatically through the Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performances of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affection model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed on deep-learning methods. It turns out that the fusion data perform better than the two modalities. To take advantage of deep-learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method’s practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state-of-the-art of hand-engineered features. |
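The core technique the abstract names is greedy, layer-wise unsupervised pre-training of a stacked denoising autoencoder, followed by supervised fine-tuning. As an illustrative sketch only, not the authors' implementation, the NumPy fragment below pre-trains two tied-weight denoising-autoencoder layers on random stand-in "feature" vectors; every name here (`pretrain_dae`, the layer sizes, the 0.3 masking-noise rate) is a hypothetical choice, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_dae(X, hidden, noise=0.3, lr=0.1, epochs=200):
    """Greedily pre-train one denoising-autoencoder layer.

    Corrupts the input with masking noise, learns an encoder (W, b)
    with a tied-weight decoder (W.T, c) to reconstruct the *clean*
    input, and returns the parameters plus the clean hidden codes.
    """
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    b = np.zeros(hidden)   # encoder bias
    c = np.zeros(d)        # decoder bias (decoder weights are tied: W.T)
    for _ in range(epochs):
        mask = rng.random(X.shape) > noise     # masking corruption
        Xn = X * mask
        H = sigmoid(Xn @ W + b)                # encode corrupted input
        Xr = sigmoid(H @ W.T + c)              # reconstruct clean input
        # backprop of mean squared reconstruction error
        dXr = (Xr - X) * Xr * (1.0 - Xr) / n   # grad w.r.t. decoder pre-activation
        dH = (dXr @ W) * H * (1.0 - H)         # grad w.r.t. encoder pre-activation
        W -= lr * (Xn.T @ dH + dXr.T @ H)      # encoder + decoder paths (tied)
        b -= lr * dH.sum(axis=0)
        c -= lr * dXr.sum(axis=0)
    return W, b, sigmoid(X @ W + b)            # clean codes feed the next layer

# Stack two layers greedily on toy stand-in feature vectors; in the paper's
# setting the input would be the fused multi-modal physiological features.
X = rng.random((64, 16))
W1, b1, H1 = pretrain_dae(X, 8)
W2, b2, H2 = pretrain_dae(H1, 4)
print(H2.shape)  # (64, 4)
```

After this pre-training stage, the stacked encoders would be topped with a classifier layer and fine-tuned with the labeled data, which is the supervised step the abstract describes.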
format | Online Article Text |
id | pubmed-9141449 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9141449 2022-05-28 Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals Luo, Junhai Tian, Yuxin Yu, Hang Chen, Yu Wu, Man Entropy (Basel) Article In recent decades, emotion recognition has received considerable attention. As more enthusiasm has shifted to the physiological pattern, a wide range of elaborate physiological emotion data features come up and are combined with various classifying models to detect one’s emotional states. To circumvent the labor of artificially designing features, we propose to acquire affective and robust representations automatically through the Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performances of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affection model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed on deep-learning methods. It turns out that the fusion data perform better than the two modalities. To take advantage of deep-learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method’s practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state-of-the-art of hand-engineered features. MDPI 2022-04-20 /pmc/articles/PMC9141449/ /pubmed/35626462 http://dx.doi.org/10.3390/e24050577 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Luo, Junhai Tian, Yuxin Yu, Hang Chen, Yu Wu, Man Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title | Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title_full | Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title_fullStr | Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title_full_unstemmed | Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title_short | Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals |
title_sort | semi-supervised cross-subject emotion recognition based on stacked denoising autoencoder architecture using a fusion of multi-modal physiological signals |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9141449/ https://www.ncbi.nlm.nih.gov/pubmed/35626462 http://dx.doi.org/10.3390/e24050577 |
work_keys_str_mv | AT luojunhai semisupervisedcrosssubjectemotionrecognitionbasedonstackeddenoisingautoencoderarchitectureusingafusionofmultimodalphysiologicalsignals AT tianyuxin semisupervisedcrosssubjectemotionrecognitionbasedonstackeddenoisingautoencoderarchitectureusingafusionofmultimodalphysiologicalsignals AT yuhang semisupervisedcrosssubjectemotionrecognitionbasedonstackeddenoisingautoencoderarchitectureusingafusionofmultimodalphysiologicalsignals AT chenyu semisupervisedcrosssubjectemotionrecognitionbasedonstackeddenoisingautoencoderarchitectureusingafusionofmultimodalphysiologicalsignals AT wuman semisupervisedcrosssubjectemotionrecognitionbasedonstackeddenoisingautoencoderarchitectureusingafusionofmultimodalphysiologicalsignals |