Human-Computer Interaction with Detection of Speaker Emotions Using Convolution Neural Networks

Bibliographic Details
Main Authors: Alnuaim, Abeer Ali, Zakariah, Mohammed, Alhadlaq, Aseel, Shashidhar, Chitra, Hatamleh, Wesam Atef, Tarazi, Hussam, Shukla, Prashant Kumar, Ratna, Rajnish
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8989588/
https://www.ncbi.nlm.nih.gov/pubmed/35401731
http://dx.doi.org/10.1155/2022/7463091
Description
Summary: Emotions play an essential role in human relationships, and many real-time applications rely on interpreting the speaker's emotion from their speech. Speech emotion recognition (SER) modules aid human-computer interaction (HCI) applications, but they are challenging to implement because of the lack of balanced training data and of clarity about which features suffice for categorization. This research examines how the classification approach, the choice of the most appropriate feature combination, and data augmentation affect speech emotion detection accuracy. Matching the right combination of handcrafted features to the classifier plays an integral part in reducing computational complexity. The proposed classification model, a 1D convolutional neural network (1D CNN), outperforms traditional machine learning approaches. Unlike most earlier studies, which examined emotions primarily through the lens of a single language, this analysis covers data sets in multiple languages. With the most discriminating features and data augmentation, the technique achieves 97.09%, 96.44%, and 83.33% accuracy on the BAVED, ANAD, and SAVEE data sets, respectively.
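
To make the pipeline the abstract describes more concrete, below is a minimal, illustrative sketch of a 1D CNN emotion classifier over handcrafted features with simple data augmentation. The feature choice (40 MFCCs), layer sizes, eight-class output, Keras framework, and additive-noise augmentation are all assumptions for illustration; the paper's exact architecture, features, and augmentation strategy may differ.

    # Illustrative sketch only: a minimal 1D CNN over handcrafted MFCC
    # features with additive-noise augmentation. Layer sizes, the 40-MFCC
    # feature choice, and the 8-class head are assumptions, not the
    # authors' published configuration.
    import numpy as np
    import librosa
    import tensorflow as tf
    from tensorflow.keras import layers, models

    N_MFCC = 40      # assumed number of MFCC coefficients per frame
    FRAMES = 94      # ~3 s of 16 kHz audio at librosa's default hop of 512
    NUM_CLASSES = 8  # assumed emotion label count; varies by corpus

    def augment(audio, noise_level=0.005):
        """Additive Gaussian noise, one common SER data augmentation."""
        return audio + noise_level * np.random.randn(len(audio))

    def extract_features(path, train=False):
        """Load a clip and return a fixed-size MFCC matrix (FRAMES, N_MFCC)."""
        audio, sr = librosa.load(path, sr=16000, duration=3.0)
        if train:
            audio = augment(audio)
        mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=N_MFCC).T
        if mfcc.shape[0] < FRAMES:  # zero-pad short clips to a fixed length
            mfcc = np.pad(mfcc, ((0, FRAMES - mfcc.shape[0]), (0, 0)))
        return mfcc[:FRAMES]

    def build_model():
        """Conv1D blocks over the time axis, then a softmax emotion head."""
        return models.Sequential([
            layers.Input(shape=(FRAMES, N_MFCC)),
            layers.Conv1D(64, 5, padding="same", activation="relu"),
            layers.MaxPooling1D(2),
            layers.Conv1D(128, 5, padding="same", activation="relu"),
            layers.MaxPooling1D(2),
            layers.Dropout(0.3),
            layers.GlobalAveragePooling1D(),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Sliding Conv1D filters along the time axis of a compact feature sequence keeps the parameter count, and hence the computational cost, well below that of a comparable 2D CNN over raw spectrogram images, which is consistent with the abstract's point that pairing the right handcrafted features with the classifier reduces computational complexity.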