A Music Emotion Classification Model Based on the Improved Convolutional Neural Network
To address the problem of music emotion classification, a music emotion recognition method based on an improved convolutional neural network is proposed. First, the mel-frequency cepstral coefficients (MFCC) and the residual phase (RP) are weighted and combined to extract low-level audio features of the music, improving the efficiency of data mining. Then, the spectrogram is fed into a convolutional recurrent neural network (CRNN) to extract time-domain, frequency-domain, and sequence features of the audio. In parallel, the low-level audio features are fed into a bidirectional long short-term memory (Bi-LSTM) network to further capture sequence information. Finally, the two sets of features are fused and passed to a softmax classifier trained with a center loss function to recognize four music emotions. Experimental results on an emotion music dataset show that the recognition accuracy of the proposed method is 92.06% and the value of the loss function is about 0.98, both better than those of the comparison methods. The proposed method offers a feasible new direction for music emotion recognition.
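The abstract describes weighting and combining MFCC and residual-phase (RP) features as the audio front end. The exact weighting scheme and RP computation are not given in this record, so the following is only a minimal sketch: it assumes librosa and SciPy, derives RP as the cosine of the analytic-signal phase of the LPC residual (a common definition from speech processing), and uses an illustrative fixed weight rather than a value from the paper.

```python
import numpy as np
import librosa
import scipy.signal


def mfcc_rp_features(path, n_mfcc=13, lpc_order=16, weight=0.6):
    """Weighted combination of MFCCs from the waveform and from its residual phase.

    `weight` and `lpc_order` are illustrative assumptions, not values from the paper.
    """
    y, sr = librosa.load(path, sr=22050, mono=True)

    # Residual phase: cosine of the analytic-signal phase of the LPC residual.
    a = librosa.lpc(y, order=lpc_order)             # LPC polynomial A(z)
    residual = scipy.signal.lfilter(a, [1.0], y)    # linear-prediction error e[n]
    analytic = scipy.signal.hilbert(residual)
    rp = np.real(analytic) / (np.abs(analytic) + 1e-8)

    # Frame-level cepstral features from both signals, then a simple weighted sum.
    mfcc_sig = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc_rp = librosa.feature.mfcc(y=rp, sr=sr, n_mfcc=n_mfcc)
    frames = min(mfcc_sig.shape[1], mfcc_rp.shape[1])
    return weight * mfcc_sig[:, :frames] + (1 - weight) * mfcc_rp[:, :frames]
```

Computing cepstral features from both the waveform and its residual phase keeps the two representations frame-aligned, which is one straightforward way to realize the "weighted and combined" step described above.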
Main Author: | Jia, Xiaosong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8860518/ https://www.ncbi.nlm.nih.gov/pubmed/35198020 http://dx.doi.org/10.1155/2022/6749622 |
_version_ | 1784654692401283072 |
---|---|
author | Jia, Xiaosong |
author_facet | Jia, Xiaosong |
author_sort | Jia, Xiaosong |
collection | PubMed |
description | Aiming at the problems of music emotion classification, a music emotion recognition method based on the convolutional neural network is proposed. First, the mel-frequency cepstral coefficient (MFCC) and residual phase (RP) are weighted and combined to extract the audio low-level features of music, so as to improve the efficiency of data mining. Then, the spectrogram is input into the convolutional recurrent neural network (CRNN) to extract the time-domain features, frequency-domain features, and sequence features of audio. At the same time, the low-level features of audio are input into the bidirectional long short-term memory (Bi-LSTM) network to further obtain the sequence information of audio features. Finally, the two parts of features are fused and input into the softmax classification function with the center loss function to achieve the recognition of four music emotions. The experimental results based on the emotion music dataset show that the recognition accuracy of the proposed method is 92.06%, and the value of the loss function is about 0.98, both of which are better than other methods. The proposed method provides a new feasible idea for the development of music emotion recognition. |
format | Online Article Text |
id | pubmed-8860518 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-8860518 2022-02-22 A Music Emotion Classification Model Based on the Improved Convolutional Neural Network Jia, Xiaosong Comput Intell Neurosci Research Article [abstract as in the description field above] Hindawi 2022-02-14 /pmc/articles/PMC8860518/ /pubmed/35198020 http://dx.doi.org/10.1155/2022/6749622 Text en Copyright © 2022 Xiaosong Jia. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Jia, Xiaosong A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title | A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title_full | A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title_fullStr | A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title_full_unstemmed | A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title_short | A Music Emotion Classification Model Based on the Improved Convolutional Neural Network |
title_sort | music emotion classification model based on the improved convolutional neural network |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8860518/ https://www.ncbi.nlm.nih.gov/pubmed/35198020 http://dx.doi.org/10.1155/2022/6749622 |
work_keys_str_mv | AT jiaxiaosong amusicemotionclassificationmodelbasedontheimprovedconvolutionalneuralnetwork AT jiaxiaosong musicemotionclassificationmodelbasedontheimprovedconvolutionalneuralnetwork |
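The description field above also outlines a two-branch architecture: a CRNN over the spectrogram, a Bi-LSTM over the low-level feature sequence, feature fusion, and a softmax classifier trained with an additional center loss for four emotion classes. The sketch below is a hedged PyTorch approximation of that layout; the layer sizes, the fusion by concatenation, and the center-loss weight are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CRNNBranch(nn.Module):
    """Convolutional layers over the spectrogram followed by a bidirectional GRU over time."""
    def __init__(self, n_mels=128, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(input_size=32 * (n_mels // 4), hidden_size=hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, spec):                        # spec: (B, 1, n_mels, T)
        x = self.conv(spec)                         # (B, 32, n_mels//4, T//4)
        x = x.permute(0, 3, 1, 2).flatten(2)        # (B, T//4, 32 * n_mels//4)
        _, h = self.gru(x)                          # h: (2, B, hidden)
        return torch.cat([h[0], h[1]], dim=1)       # (B, 2 * hidden)


class BiLSTMBranch(nn.Module):
    """Bi-LSTM over the frame-level MFCC+RP feature sequence."""
    def __init__(self, feat_dim=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, feats):                       # feats: (B, T, feat_dim)
        _, (h, _) = self.lstm(feats)                # h: (2, B, hidden)
        return torch.cat([h[0], h[1]], dim=1)       # (B, 2 * hidden)


class CenterLoss(nn.Module):
    """Pulls each fused embedding toward a learnable per-class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()


class EmotionNet(nn.Module):
    """Two-branch model: CRNN on the spectrogram, Bi-LSTM on low-level features, fused by concatenation."""
    def __init__(self, n_mels=128, feat_dim=13, num_classes=4):
        super().__init__()
        self.crnn = CRNNBranch(n_mels)
        self.bilstm = BiLSTMBranch(feat_dim)
        self.fc = nn.Linear(4 * 64, num_classes)    # 2*hidden from each branch

    def forward(self, spec, feats):
        fused = torch.cat([self.crnn(spec), self.bilstm(feats)], dim=1)
        return self.fc(fused), fused


# Joint objective: softmax cross-entropy plus a weighted center loss (weight 0.1 is illustrative).
model = EmotionNet()
center_loss = CenterLoss(num_classes=4, feat_dim=4 * 64)
spec = torch.randn(8, 1, 128, 256)                  # batch of mel-spectrograms
feats = torch.randn(8, 100, 13)                     # batch of MFCC+RP frame sequences
labels = torch.randint(0, 4, (8,))
logits, fused = model(spec, feats)
loss = F.cross_entropy(logits, labels) + 0.1 * center_loss(fused, labels)
loss.backward()
```

The center term pulls embeddings of the same emotion class toward a learned center, complementing the softmax cross-entropy term, which only separates the classes from one another.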