
A Study of Piano-Assisted Automated Accompaniment System Based on Heuristic Dynamic Planning

Bibliographic Details
Main Authors: Lin, Mengqian, Zhao, Rui
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9152395/
https://www.ncbi.nlm.nih.gov/pubmed/35655518
http://dx.doi.org/10.1155/2022/4999447
Description
Summary: In this paper, a piano-assisted automated accompaniment system is designed and applied in practice using a heuristic dynamic planning approach. From the perspective of assisting pop song writing, we target the generation of piano vocal weaves in accompaniment, build an accompaniment piano generation tool through systematic algorithm design and programming, and realize, within a controlled range under a single system, the generation of numerous recognizable weaving styles. Mainstream music detection neural network approaches usually recast the problem as image classification or sequence labelling and then solve it with models such as convolutional or recurrent neural networks; however, these existing approaches ignore the music relative loudness estimation subtask and the inherent temporality of music data when solving the music detection task. Likewise, existing music generation neural network methods have not yet solved the problem of discrete integrability brought by the piano-roll representation of music data, nor the still-limited control domain and variety of instruments in the controllable music generation task. To address these two problems, this paper proposes a controlled music generation neural network model for multi-instrument polyphonic music. The effectiveness of the proposed model is verified through several sets of experiments on the collected MIDICN data set, and the experimental results show that the model achieves better performance in terms of negative log-likelihood, perplexity, musicality measures, domain similarity analysis, and manual evaluation.
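The summary reports results in terms of negative log-likelihood and perplexity. As an illustrative aside (not code from the paper), the standard relationship between the two metrics for a token sequence, such as a piano-roll event sequence, is that perplexity is the exponential of the average per-token negative log-likelihood; the token count and NLL value below are hypothetical.

```python
import math

def perplexity_from_nll(total_nll: float, num_tokens: int) -> float:
    """Perplexity is exp of the mean per-token negative log-likelihood
    (natural-log base assumed)."""
    return math.exp(total_nll / num_tokens)

# Hypothetical example: a 64-token generated sequence whose summed
# negative log-likelihood is 96.0 nats.
print(perplexity_from_nll(96.0, 64))  # exp(1.5) ≈ 4.48
```

Under this convention, a lower average negative log-likelihood directly implies a lower (better) perplexity, which is why the two metrics are typically reported together.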