Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks
Main author: | Wang, Yi |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10166492/ https://www.ncbi.nlm.nih.gov/pubmed/37155635 http://dx.doi.org/10.1371/journal.pone.0285496 |
_version_ | 1785038453774221312 |
author | Wang, Yi |
author_facet | Wang, Yi |
author_sort | Wang, Yi |
collection | PubMed |
description | Music performance action generation is a research hotspot in computer vision and cross-sequence analysis and can be applied in multiple real-world scenarios. However, current methods for generating music performance actions have consistently ignored the connection between music and performance actions, resulting in a strong sense of separation between the visual and auditory content. This paper first analyzes the attention mechanism, the Recurrent Neural Network (RNN), and the long short-term RNN, which is well suited to sequence data with strong temporal correlation. On this basis, the current learning method is improved, and a new model combining attention mechanisms with the long short-term RNN is proposed that can generate performance actions from music beat sequences. In addition, image description generation models with attention mechanisms are adopted. Combined with an RNN abstract structure that does not consider recursion, the abstract network structure of the RNN-Long Short-Term Memory (LSTM) is optimized. Through music beat recognition and dance movement extraction, data resources are allocated and adjusted within the edge server architecture. The evaluation metric is the model's loss function value. The superiority of the proposed model is mainly reflected in the high accuracy and low resource consumption of dance movement recognition. The experimental results show that the model's loss function reaches a minimum of 0.00026, and the generated video is best when the LSTM module has 3 layers, 256 nodes, and a Lookback value of 15. Compared with three other cross-domain sequence analysis models, the new model generates harmonious and rich performance action sequences while ensuring the stability of performance action generation, and performs excellently in combining music with performance actions. 
This paper has practical reference value for promoting the application of edge computing technology in intelligent auxiliary systems for music performance. |
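The description above combines an LSTM over a music-beat window with an attention step before decoding a performance-action frame. A minimal NumPy sketch of that idea follows; it is not the paper's implementation, and the feature sizes (`BEAT_DIM`, `POSE_DIM`), the tiny hidden size, and the random weights are illustrative assumptions. Only the Lookback value of 15 is taken from the record; the edge-server resource allocation is out of scope here.

```python
import numpy as np

rng = np.random.default_rng(0)

BEAT_DIM = 8    # per-beat music feature size (assumed)
HIDDEN = 16     # LSTM hidden size (the paper reports 256 nodes; kept small here)
POSE_DIM = 12   # output performance-action vector size (assumed)
LOOKBACK = 15   # beats of context, matching the paper's Lookback value

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single-layer LSTM cell with randomly initialized weights."""
    def __init__(self, in_dim, hid):
        self.W = rng.normal(0, 0.1, (4 * hid, in_dim + hid))
        self.b = np.zeros(4 * hid)
        self.hid = hid

    def run(self, xs):
        h = np.zeros(self.hid)
        c = np.zeros(self.hid)
        states = []
        for x in xs:                       # one step per beat
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, o, g = np.split(z, 4)    # input, forget, output, candidate
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            states.append(h)
        return np.array(states)            # shape (LOOKBACK, hid)

def attention_pool(states):
    """Dot-product attention: weight all hidden states by the last state."""
    scores = states @ states[-1]
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ states                      # context vector, shape (hid,)

# Decode one pose frame from a LOOKBACK-beat window of synthetic features.
beats = rng.normal(size=(LOOKBACK, BEAT_DIM))
lstm = TinyLSTM(BEAT_DIM, HIDDEN)
states = lstm.run(beats)
context = attention_pool(states)
W_out = rng.normal(0, 0.1, (POSE_DIM, HIDDEN))
pose = W_out @ context                     # one action frame per window
print(pose.shape)
```

In a full model this window would slide over the beat sequence, emitting one pose per step, and the three stacked LSTM layers from the paper would replace the single cell here.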
format | Online Article Text |
id | pubmed-10166492 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-10166492 2023-05-09 Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks Wang, Yi PLoS One Research Article Public Library of Science 2023-05-08 /pmc/articles/PMC10166492/ /pubmed/37155635 http://dx.doi.org/10.1371/journal.pone.0285496 Text en © 2023 Yi Wang. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Wang, Yi Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title | Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title_full | Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title_fullStr | Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title_full_unstemmed | Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title_short | Intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
title_sort | intelligent auxiliary system for music performance under edge computing and long short-term recurrent neural networks |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10166492/ https://www.ncbi.nlm.nih.gov/pubmed/37155635 http://dx.doi.org/10.1371/journal.pone.0285496 |
work_keys_str_mv | AT wangyi intelligentauxiliarysystemformusicperformanceunderedgecomputingandlongshorttermrecurrentneuralnetworks |