Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition
The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added behind each pooling layer to retain features at different resolutions, and the resulting features are fused by element-wise addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that supplies the temporal structure information of the emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and realizes their adaptive fusion by weighting them accordingly. To restrain gradient divergence in the network, the individual network features and the fused features are connected through shortcut connections to obtain the final fusion features for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, achieving a recognition rate superior to most existing state-of-the-art methods.
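The abstract's first idea, retaining feature maps at several pooling resolutions and fusing them by addition, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed shapes, not the authors' AlexNet-based implementation: `avg_pool_1d`, `fuse_multistream`, and the pooling factors are hypothetical names chosen for the example, and the pooled maps stand in for the feature maps behind each pooling layer of the MSFCN.

```python
import numpy as np

def avg_pool_1d(x, k):
    """Average-pool a (time, channels) feature map by factor k."""
    t = (x.shape[0] // k) * k
    return x[:t].reshape(-1, k, x.shape[1]).mean(axis=1)

def fuse_multistream(x, factors=(2, 4, 8)):
    """Pool the input at several rates (stand-ins for the maps behind
    each pooling layer), align every branch to the coarsest resolution,
    and fuse by element-wise addition, as the abstract describes."""
    fused = np.zeros_like(avg_pool_1d(x, factors[-1]))
    for k in factors:
        branch = avg_pool_1d(x, k)                     # sub-branch output
        fused += avg_pool_1d(branch, factors[-1] // k)  # align resolutions
    return fused
```

In the real network each branch would pass through its own convolutions before fusion; here the branches are plain pooled copies, so the sum simply accumulates aligned views of the same signal.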
Main Authors: | Tao, Huawei; Geng, Lei; Shan, Shuai; Mai, Jingchao; Fu, Hongliang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9331177/ https://www.ncbi.nlm.nih.gov/pubmed/35893005 http://dx.doi.org/10.3390/e24081025 |
_version_ | 1784758338037219328 |
---|---|
author | Tao, Huawei Geng, Lei Shan, Shuai Mai, Jingchao Fu, Hongliang |
author_facet | Tao, Huawei Geng, Lei Shan, Shuai Mai, Jingchao Fu, Hongliang |
author_sort | Tao, Huawei |
collection | PubMed |
description | The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added behind each pooling layer to retain features at different resolutions, and the resulting features are fused by element-wise addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that supplies the temporal structure information of the emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and realizes their adaptive fusion by weighting them accordingly. To restrain gradient divergence in the network, the individual network features and the fused features are connected through shortcut connections to obtain the final fusion features for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, achieving a recognition rate superior to most existing state-of-the-art methods. |
format | Online Article Text |
id | pubmed-9331177 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-93311772022-07-29 Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition Tao, Huawei Geng, Lei Shan, Shuai Mai, Jingchao Fu, Hongliang Entropy (Basel) Article The quality of feature extraction plays a significant role in the performance of speech emotion recognition (SER). To extract discriminative, affect-salient features from speech signals and thereby improve SER performance, this paper proposes a multi-stream convolution-recurrent neural network based on an attention mechanism (MSCRNN-A). First, a multi-stream sub-branch fully convolutional network (MSFCN) based on AlexNet is presented to limit the loss of emotional information: sub-branches are added behind each pooling layer to retain features at different resolutions, and the resulting features are fused by element-wise addition. Second, the MSFCN is combined with a Bi-LSTM network to form a hybrid network that supplies the temporal structure information of the emotional features. Finally, a feature fusion model based on a multi-head attention mechanism is developed to obtain the best fused features: the attention mechanism computes the contribution of each network's features and realizes their adaptive fusion by weighting them accordingly. To restrain gradient divergence in the network, the individual network features and the fused features are connected through shortcut connections to obtain the final fusion features for recognition. Experimental results on three conventional SER corpora, CASIA, EMODB, and SAVEE, show that the proposed method significantly improves recognition performance, achieving a recognition rate superior to most existing state-of-the-art methods.
MDPI 2022-07-26 /pmc/articles/PMC9331177/ /pubmed/35893005 http://dx.doi.org/10.3390/e24081025 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Tao, Huawei Geng, Lei Shan, Shuai Mai, Jingchao Fu, Hongliang Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title | Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title_full | Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title_fullStr | Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title_full_unstemmed | Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title_short | Multi-Stream Convolution-Recurrent Neural Networks Based on Attention Mechanism Fusion for Speech Emotion Recognition |
title_sort | multi-stream convolution-recurrent neural networks based on attention mechanism fusion for speech emotion recognition |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9331177/ https://www.ncbi.nlm.nih.gov/pubmed/35893005 http://dx.doi.org/10.3390/e24081025 |
work_keys_str_mv | AT taohuawei multistreamconvolutionrecurrentneuralnetworksbasedonattentionmechanismfusionforspeechemotionrecognition AT genglei multistreamconvolutionrecurrentneuralnetworksbasedonattentionmechanismfusionforspeechemotionrecognition AT shanshuai multistreamconvolutionrecurrentneuralnetworksbasedonattentionmechanismfusionforspeechemotionrecognition AT maijingchao multistreamconvolutionrecurrentneuralnetworksbasedonattentionmechanismfusionforspeechemotionrecognition AT fuhongliang multistreamconvolutionrecurrentneuralnetworksbasedonattentionmechanismfusionforspeechemotionrecognition |
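The abstract's fusion step, multi-head attention that weights each network's features before combining them, plus a shortcut connection, can also be sketched compactly. This is a minimal NumPy illustration under assumed shapes, not the paper's MSCRNN-A: `multi_head_attention_fuse` is a hypothetical name, the projections are random stand-ins for learned weights, and the shortcut is modeled as adding the mean of the raw streams back to the attended output.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention_fuse(streams, num_heads=4, seed=0):
    """Treat each stream's feature vector as a token, apply one layer of
    multi-head self-attention (random projections here; learned in the
    real model), average the attended tokens, and add a shortcut
    connection from the mean of the raw streams."""
    rng = np.random.default_rng(seed)
    X = np.stack(streams)                      # (num_streams, d)
    n, d = X.shape
    assert d % num_heads == 0
    dh = d // num_heads
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    # split projections into heads: (num_heads, num_streams, dh)
    Q = (X @ Wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    K = (X @ Wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    V = (X @ Wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    A = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(dh))  # per-head weights
    out = (A @ V).transpose(1, 0, 2).reshape(n, d)       # attended tokens
    return out.mean(axis=0) + X.mean(axis=0)             # fusion + shortcut
```

The attention weights in `A` play the role of the "contribution degree" the abstract mentions, and the final addition of `X.mean(axis=0)` mirrors the shortcut connection used to restrain gradient divergence.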