Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer
Speech emotion recognition (SER) is a challenging task in human–computer interaction (HCI) systems. One of the key challenges in speech emotion recognition is to extract the emotional features effectively from a speech utterance. Despite the promising results of recent studies, they generally do not leverage advanced fusion algorithms for the generation of effective representations of emotional features in speech utterances.
Main Authors: Ullah, Rizwan; Asif, Muhammad; Shah, Wahab Ali; Anjam, Fakhar; Ullah, Ibrar; Khurshaid, Tahir; Wuttisittikulkij, Lunchakorn; Shah, Shashi; Ali, Syed Mansoor; Alibakhshikenari, Mohammad
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10346498/ https://www.ncbi.nlm.nih.gov/pubmed/37448062 http://dx.doi.org/10.3390/s23136212
_version_ | 1785073327812902912 |
author | Ullah, Rizwan Asif, Muhammad Shah, Wahab Ali Anjam, Fakhar Ullah, Ibrar Khurshaid, Tahir Wuttisittikulkij, Lunchakorn Shah, Shashi Ali, Syed Mansoor Alibakhshikenari, Mohammad |
author_facet | Ullah, Rizwan Asif, Muhammad Shah, Wahab Ali Anjam, Fakhar Ullah, Ibrar Khurshaid, Tahir Wuttisittikulkij, Lunchakorn Shah, Shashi Ali, Syed Mansoor Alibakhshikenari, Mohammad |
author_sort | Ullah, Rizwan |
collection | PubMed |
description | Speech emotion recognition (SER) is a challenging task in human–computer interaction (HCI) systems. One of the key challenges in speech emotion recognition is to extract the emotional features effectively from a speech utterance. Despite the promising results of recent studies, they generally do not leverage advanced fusion algorithms for the generation of effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two parallel CNNs for spatial feature representation in parallel to a Transformer encoder for temporal feature representation, thereby simultaneously expanding the filter depth and reducing the feature map with an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. We augment and intensify the variations in the dataset to minimize model overfitting. Additive White Gaussian Noise (AWGN) is used to augment the RAVDESS dataset. With the spatial and sequential feature representations of CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. In addition, the SER system is evaluated with the IEMOCAP dataset and achieves 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the success of the presented SER system and demonstrate an absolute performance improvement over the state-of-the-art (SOTA) models. |
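The description outlines two parallel CNN branches for spatial features running alongside a Transformer encoder for temporal features, with the resulting representations fused before classification into eight emotions. The following is a minimal PyTorch sketch of that kind of parallel fusion, assuming a log-mel spectrogram input of shape (batch, 1, n_mels, time); the layer widths, kernel sizes, head counts, and pooling choices are illustrative placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class ParallelCNNTransformerSER(nn.Module):
    """Sketch of a parallel CNN + Transformer-encoder fusion model for SER."""

    def __init__(self, n_mels: int = 128, n_emotions: int = 8, d_model: int = 128):
        super().__init__()
        # Two parallel CNN branches with different kernel sizes for spatial features
        # (kernel sizes and channel widths are assumptions, not the paper's values).
        self.cnn_a = self._branch(kernel=3)
        self.cnn_b = self._branch(kernel=5)
        # Transformer encoder over the time axis for temporal features.
        self.proj = nn.Linear(n_mels, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head: concatenate pooled CNN and Transformer features, then classify.
        self.classifier = nn.Linear(32 + 32 + d_model, n_emotions)

    @staticmethod
    def _branch(kernel: int) -> nn.Sequential:
        pad = kernel // 2
        return nn.Sequential(
            nn.Conv2d(1, 16, kernel, padding=pad), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel, padding=pad), nn.BatchNorm2d(32), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (batch, 32, 1, 1)
        )

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, n_mels, time) log-mel spectrogram
        a = self.cnn_a(spec).flatten(1)                     # (batch, 32)
        b = self.cnn_b(spec).flatten(1)                     # (batch, 32)
        seq = spec.squeeze(1).transpose(1, 2)               # (batch, time, n_mels)
        t = self.transformer(self.proj(seq)).mean(dim=1)    # (batch, d_model)
        fused = torch.cat([a, b, t], dim=1)
        return self.classifier(fused)

# Example: x = torch.randn(4, 1, 128, 300); logits = ParallelCNNTransformerSER()(x)
```

The point of the sketch is the fusion pattern itself: spatial and temporal branches are computed independently and concatenated before the classifier, which matches the abstract's description of parallelized CNNs and a Transformer encoder.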
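The description also mentions augmenting RAVDESS with Additive White Gaussian Noise (AWGN) to reduce overfitting. Below is a minimal sketch of what such augmentation can look like on a raw waveform; the function name, SNR range, and the librosa loading call are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def add_awgn(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Add white Gaussian noise to a waveform at a given signal-to-noise ratio (dB)."""
    signal_power = np.mean(signal ** 2)
    # Scale the noise power so that 10 * log10(signal_power / noise_power) == snr_db.
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example usage (SNR range is a hypothetical choice, not reported in the paper):
# waveform, sr = librosa.load("ravdess_utterance.wav", sr=16000)
# augmented = add_awgn(waveform, snr_db=np.random.uniform(15, 30))
```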
format | Online Article Text |
id | pubmed-10346498 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10346498 2023-07-15 Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer Ullah, Rizwan Asif, Muhammad Shah, Wahab Ali Anjam, Fakhar Ullah, Ibrar Khurshaid, Tahir Wuttisittikulkij, Lunchakorn Shah, Shashi Ali, Syed Mansoor Alibakhshikenari, Mohammad Sensors (Basel) Article Speech emotion recognition (SER) is a challenging task in human–computer interaction (HCI) systems. One of the key challenges in speech emotion recognition is to extract the emotional features effectively from a speech utterance. Despite the promising results of recent studies, they generally do not leverage advanced fusion algorithms for the generation of effective representations of emotional features in speech utterances. To address this problem, we describe the fusion of spatial and temporal feature representations of speech emotion by parallelizing convolutional neural networks (CNNs) and a Transformer encoder for SER. We stack two parallel CNNs for spatial feature representation in parallel to a Transformer encoder for temporal feature representation, thereby simultaneously expanding the filter depth and reducing the feature map with an expressive hierarchical feature representation at a lower computational cost. We use the RAVDESS dataset to recognize eight different speech emotions. We augment and intensify the variations in the dataset to minimize model overfitting. Additive White Gaussian Noise (AWGN) is used to augment the RAVDESS dataset. With the spatial and sequential feature representations of CNNs and the Transformer, the SER model achieves 82.31% accuracy for eight emotions on a hold-out dataset. In addition, the SER system is evaluated with the IEMOCAP dataset and achieves 79.42% recognition accuracy for five emotions. Experimental results on the RAVDESS and IEMOCAP datasets show the success of the presented SER system and demonstrate an absolute performance improvement over the state-of-the-art (SOTA) models. MDPI 2023-07-07 /pmc/articles/PMC10346498/ /pubmed/37448062 http://dx.doi.org/10.3390/s23136212 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Ullah, Rizwan Asif, Muhammad Shah, Wahab Ali Anjam, Fakhar Ullah, Ibrar Khurshaid, Tahir Wuttisittikulkij, Lunchakorn Shah, Shashi Ali, Syed Mansoor Alibakhshikenari, Mohammad Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title | Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title_full | Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title_fullStr | Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title_full_unstemmed | Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title_short | Speech Emotion Recognition Using Convolution Neural Networks and Multi-Head Convolutional Transformer |
title_sort | speech emotion recognition using convolution neural networks and multi-head convolutional transformer |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10346498/ https://www.ncbi.nlm.nih.gov/pubmed/37448062 http://dx.doi.org/10.3390/s23136212 |
work_keys_str_mv | AT ullahrizwan speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT asifmuhammad speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT shahwahabali speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT anjamfakhar speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT ullahibrar speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT khurshaidtahir speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT wuttisittikulkijlunchakorn speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT shahshashi speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT alisyedmansoor speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer AT alibakhshikenarimohammad speechemotionrecognitionusingconvolutionneuralnetworksandmultiheadconvolutionaltransformer |