
Multi-Modal Fusion Emotion Recognition Method of Speech Expression Based on Deep Learning

Redundant information and noise introduced during single-modal feature extraction make it difficult for traditional learning algorithms to achieve ideal recognition performance. A multi-modal fusion emotion recognition method for speech and facial expression based on deep learning is proposed. First, a dedicated feature extraction method is set up for each modality: speech features are extracted with a convolutional neural network-long short-term memory (CNN-LSTM) network, and facial expressions in video are encoded with an Inception-ResNet-v2 network. A long short-term memory (LSTM) network then captures the correlations between and within modalities. After chi-square feature selection, the single-modal features are spliced into a unified fusion feature. Finally, the fused features output by the LSTM are fed to a LIBSVM classifier for the final emotion recognition. Experimental results show that the recognition accuracy of the proposed method on the MOSI and MELD datasets is 87.56% and 90.06%, respectively, better than the comparison methods. This lays a theoretical foundation for the application of multi-modal fusion in emotion recognition.
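As a rough illustration of the pipeline the abstract describes, below is a minimal, runnable sketch in Python (PyTorch and scikit-learn). Everything concrete in it is a hypothetical stand-in: the audio branch approximates the paper's CNN-LSTM with a small Conv1d plus LSTM, the video branch assumes per-frame Inception-ResNet-v2 features have already been extracted (random 1536-dimensional vectors here, 1536 being that network's pooled feature width), and scikit-learn's SVC, which wraps LIBSVM, stands in for the LIBSVM classifier. Only the order of operations follows the abstract: extract per modality, splice, chi-square select, classify.

import numpy as np
import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC  # scikit-learn's SVC is built on LIBSVM

class AudioCNNLSTM(nn.Module):
    # Stand-in for the paper's CNN-LSTM audio branch (hypothetical sizes).
    def __init__(self, n_mels=40, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 64, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(64, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, n_mels, time)
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, 64)
        _, (h_n, _) = self.lstm(h)
        return h_n[-1]                    # (batch, hidden) utterance embedding

class VideoLSTM(nn.Module):
    # Temporal LSTM over precomputed per-frame features (the paper extracts
    # these with Inception-ResNet-v2).
    def __init__(self, frame_dim=1536, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, hidden, batch_first=True)

    def forward(self, frames):            # frames: (batch, n_frames, frame_dim)
        _, (h_n, _) = self.lstm(frames)
        return h_n[-1]

# Toy data: 200 utterances with binary emotion labels (all hypothetical).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
audio = torch.randn(200, 40, 100)         # (batch, mel bins, time steps)
video = torch.randn(200, 16, 1536)        # (batch, frames, frame feature dim)

with torch.no_grad():
    a_feat = AudioCNNLSTM()(audio).numpy()
    v_feat = VideoLSTM()(video).numpy()

# Splice the single-modal embeddings into one fusion vector, then select
# features with the chi-square test. chi2 requires non-negative inputs,
# so the fused features are min-max scaled to [0, 1] first.
fused = MinMaxScaler().fit_transform(np.hstack([a_feat, v_feat]))
selected = SelectKBest(chi2, k=64).fit_transform(fused, y)

# Final emotion classification with an SVM (LIBSVM under the hood).
clf = SVC(kernel="rbf").fit(selected[:150], y[:150])
print("held-out accuracy:", clf.score(selected[150:], y[150:]))

On real data the two branches would of course be trained on emotion labels rather than used at random initialization; the sketch only fixes the data flow.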


Bibliographic Details
Main Authors: Liu, Dong; Wang, Zhiyong; Wang, Lifeng; Chen, Longxi
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8300695/
https://www.ncbi.nlm.nih.gov/pubmed/34305565
http://dx.doi.org/10.3389/fnbot.2021.697634
Journal: Frontiers in Neurorobotics (Front Neurorobot), Neuroscience section
Published online: 2021-07-09
License: Copyright © 2021 Liu, Wang, Wang and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/); use, distribution, or reproduction in other forums is permitted provided the original author(s) and copyright owner(s) are credited and the original publication in this journal is cited.