
An improved multi-input deep convolutional neural network for automatic emotion recognition


Bibliographic Details
Main Authors: Chen, Peiji, Zou, Bochao, Belkacem, Abdelkader Nasreddine, Lyu, Xiangwen, Zhao, Xixi, Yi, Weibo, Huang, Zhaoyang, Liang, Jun, Chen, Chao
Format: Online, Article, Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9577494/
https://www.ncbi.nlm.nih.gov/pubmed/36267236
http://dx.doi.org/10.3389/fnins.2022.965871
author Chen, Peiji
Zou, Bochao
Belkacem, Abdelkader Nasreddine
Lyu, Xiangwen
Zhao, Xixi
Yi, Weibo
Huang, Zhaoyang
Liang, Jun
Chen, Chao
collection PubMed
description Current decoding algorithms based on a one-dimensional (1D) convolutional neural network (CNN) have shown effectiveness in the automatic recognition of emotional tasks using physiological signals. However, these recognition models usually take a single modality of physiological signal as input, ignoring the inter-correlations between different modalities, which could be an important source of information for emotion recognition. Therefore, a complete end-to-end multi-input deep convolutional neural network (MI-DCNN) structure was designed in this study. The newly designed 1D-CNN structure takes full advantage of multi-modal physiological signals and automatically performs both feature extraction and emotion classification. To evaluate the effectiveness of the proposed model, we designed an emotion elicitation experiment and collected physiological signals, including electrocardiography (ECG), electrodermal activity (EDA), and respiratory activity (RSP), from a total of 52 participants while they watched emotion elicitation videos. Traditional machine learning methods were then applied as baseline comparisons: for arousal, the baseline accuracy and F1-score on our dataset were 62.9 ± 0.9% and 0.628 ± 0.01, respectively; for valence, they were 60.3 ± 0.8% and 0.600 ± 0.01, respectively. Differences between the MI-DCNN and a single-input DCNN were also compared, and the proposed method was verified on two public datasets (DEAP and DREAMER) as well as our own. On our dataset, the results showed a significant improvement in both tasks over traditional machine learning methods (t-test; arousal: p = 9.7E-03 < 0.01, valence: p = 6.5E-03 < 0.01), demonstrating the strength of a multi-input convolutional neural network for emotion recognition based on multi-modal physiological signals.
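The abstract specifies the MI-DCNN only at a high level: each physiological signal enters its own 1D convolutional branch, and the network runs from raw signal to emotion class end to end. A minimal sketch of such a multi-input 1D-CNN in Keras follows; all filter counts, kernel sizes, and the 1,000-sample window length are illustrative assumptions, not the published architecture.

    # Minimal sketch of a multi-input 1D-CNN for binary high/low
    # classification of arousal or valence. Layer sizes and the
    # 1,000-sample window (e.g., 10 s at 100 Hz) are assumptions,
    # not the architecture published in the paper.
    from tensorflow.keras import layers, Model

    def conv_branch(name, length=1000):
        # One convolutional feature extractor per modality.
        inp = layers.Input(shape=(length, 1), name=name)
        x = layers.Conv1D(32, 7, activation="relu", padding="same")(inp)
        x = layers.MaxPooling1D(4)(x)
        x = layers.Conv1D(64, 5, activation="relu", padding="same")(x)
        x = layers.GlobalAveragePooling1D()(x)
        return inp, x

    # Separate branches for ECG, EDA, and RSP are merged before the
    # classifier, so the shared dense layers can exploit cross-modal
    # correlations that a single-input model never sees.
    ecg_in, ecg_feat = conv_branch("ecg")
    eda_in, eda_feat = conv_branch("eda")
    rsp_in, rsp_feat = conv_branch("rsp")
    merged = layers.Concatenate()([ecg_feat, eda_feat, rsp_feat])
    hidden = layers.Dropout(0.5)(layers.Dense(64, activation="relu")(merged))
    output = layers.Dense(1, activation="sigmoid")(hidden)

    model = Model([ecg_in, eda_in, rsp_in], output)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

Feeding each modality through its own branch lets the network learn modality-specific filters, while the merged dense layers can model the inter-correlations the abstract identifies as the missing source of information in single-input models.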
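The comparison against the traditional machine learning baselines is reported only as a t-test with its resulting p-values. A minimal sketch of such a check, assuming a paired design over matched per-fold accuracies; the score arrays below are hypothetical placeholders, not the authors' data.

    # Paired t-test over matched per-fold accuracies of two models.
    # The values below are hypothetical placeholders; with the paper's
    # actual per-fold results, the same call would yield the reported
    # p-values (arousal: p = 9.7E-03, valence: p = 6.5E-03).
    from scipy import stats

    baseline_acc = [0.62, 0.64, 0.61, 0.63, 0.625]  # traditional ML
    midcnn_acc   = [0.68, 0.70, 0.66, 0.69, 0.675]  # MI-DCNN

    t_stat, p_value = stats.ttest_rel(midcnn_acc, baseline_acc)
    print(f"t = {t_stat:.3f}, p = {p_value:.3e}")  # significant if p < 0.01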
format Online
Article
Text
id pubmed-9577494
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9577494 2022-10-19
Front Neurosci (Neuroscience)
Frontiers Media S.A. 2022-10-04
/pmc/articles/PMC9577494/
/pubmed/36267236
http://dx.doi.org/10.3389/fnins.2022.965871
Text en
Copyright © 2022 Chen, Zou, Belkacem, Lyu, Zhao, Yi, Huang, Liang and Chen. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title An improved multi-input deep convolutional neural network for automatic emotion recognition
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9577494/
https://www.ncbi.nlm.nih.gov/pubmed/36267236
http://dx.doi.org/10.3389/fnins.2022.965871