Construction of Music Intelligent Creation Model Based on Convolutional Neural Network
The application of machine learning to intelligent music creation has become an important field of research. Most current work on intelligent music creation uses fixed coding steps for audio data, which leads to weak feature expression. Based on convolution...
Main Author: | Chen, Jing |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9276503/ https://www.ncbi.nlm.nih.gov/pubmed/35837219 http://dx.doi.org/10.1155/2022/2854066 |
_version_ | 1784745744341663744 |
---|---|
author | Chen, Jing |
author_facet | Chen, Jing |
author_sort | Chen, Jing |
collection | PubMed |
description | The application of machine learning to intelligent music creation has become an important field of research. Most current work on intelligent music creation uses fixed coding steps for audio data, which leads to weak feature expression. Based on convolutional neural network theory, this paper proposes a deep intelligent music creation method. The model uses a convolutional recurrent neural network to generate an effective hash code: the music signal is first preprocessed to obtain a Mel spectrogram, which is then fed into a pretrained CNN to extract spatial detail and the semantic information of musical symbols from its convolutional layers. A selection strategy applied to the feature map of each convolutional layer builds a feature map sequence, addressing the problems of high feature dimensionality and poor recognition performance. In the simulation, the Mel-frequency cepstral coefficient (MFCC) method was used to extract features from four different music signals; the features representing each signal were extracted by the convolutional neural network, and the continuous signals were discretized and reduced. The experimental results show that the high-dimensional music data are dimensionally reduced at the data level. After the data are compressed, the accuracy of intelligent creation reaches 98% and the characteristic signal distortion rate falls below 5%, effectively improving algorithm performance and the ability to create music intelligently. (An illustrative code sketch of this pipeline follows the record below.) |
format | Online Article Text |
id | pubmed-9276503 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-9276503 2022-07-13 Construction of Music Intelligent Creation Model Based on Convolutional Neural Network Chen, Jing Comput Intell Neurosci Research Article The application of machine learning to intelligent music creation has become an important field of research. Most current work on intelligent music creation uses fixed coding steps for audio data, which leads to weak feature expression. Based on convolutional neural network theory, this paper proposes a deep intelligent music creation method. The model uses a convolutional recurrent neural network to generate an effective hash code: the music signal is first preprocessed to obtain a Mel spectrogram, which is then fed into a pretrained CNN to extract spatial detail and the semantic information of musical symbols from its convolutional layers. A selection strategy applied to the feature map of each convolutional layer builds a feature map sequence, addressing the problems of high feature dimensionality and poor recognition performance. In the simulation, the Mel-frequency cepstral coefficient (MFCC) method was used to extract features from four different music signals; the features representing each signal were extracted by the convolutional neural network, and the continuous signals were discretized and reduced. The experimental results show that the high-dimensional music data are dimensionally reduced at the data level. After the data are compressed, the accuracy of intelligent creation reaches 98% and the characteristic signal distortion rate falls below 5%, effectively improving algorithm performance and the ability to create music intelligently. Hindawi 2022-07-05 /pmc/articles/PMC9276503/ /pubmed/35837219 http://dx.doi.org/10.1155/2022/2854066 Text en Copyright © 2022 Jing Chen. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Chen, Jing Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title | Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title_full | Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title_fullStr | Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title_full_unstemmed | Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title_short | Construction of Music Intelligent Creation Model Based on Convolutional Neural Network |
title_sort | construction of music intelligent creation model based on convolutional neural network |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9276503/ https://www.ncbi.nlm.nih.gov/pubmed/35837219 http://dx.doi.org/10.1155/2022/2854066 |
work_keys_str_mv | AT chenjing constructionofmusicintelligentcreationmodelbasedonconvolutionalneuralnetwork |
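The abstract in the description field above outlines a concrete pipeline: compute a Mel spectrogram and MFCCs from the music signal, extract feature maps from the convolutional layers of a pretrained CNN, reduce them to a feature sequence, and derive a hash code. The Python sketch below illustrates that flow under stated assumptions; the input file `clip.wav`, the choice of VGG16 via torchvision, the per-layer averaging used in place of the paper's selection strategy, and the simple thresholded hash are all illustrative assumptions, not details taken from the article.

```python
import numpy as np
import librosa
import torch
import torchvision

# --- 1. Signal preprocessing: Mel spectrogram and MFCC features ---
y, sr = librosa.load("clip.wav", sr=22050, mono=True)        # hypothetical input clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)                # log-Mel spectrogram
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)           # MFCC features

# --- 2. Pass the spectrogram through a pretrained CNN and keep conv-layer outputs ---
cnn = torchvision.models.vgg16(weights="DEFAULT").features.eval()
x = torch.tensor(mel_db, dtype=torch.float32)
x = x.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)           # shape (1, 3, n_mels, frames)

feature_maps = []
with torch.no_grad():
    for layer in cnn:
        x = layer(x)
        if isinstance(layer, torch.nn.Conv2d):               # collect each conv layer's output
            feature_maps.append(x)

# --- 3. Reduce each feature map to a vector (placeholder for the paper's selection strategy) ---
sequence = [fm.mean(dim=(2, 3)).squeeze(0) for fm in feature_maps]

# --- 4. Toy binary hash code from the last vector (stand-in for the CRNN hashing step) ---
code = (sequence[-1] > sequence[-1].mean()).to(torch.int8)
print(len(sequence), code[:16])
```

In the method described by the abstract, a convolutional recurrent neural network generates the hash code from the feature map sequence; the thresholding in step 4 is only a minimal stand-in for that component.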