The use of deep learning technology in dance movement generation
The dance generated by traditional music-action matching and statistical mapping models is poorly consistent with the music itself, and such models cannot generate new dance movements. A dance movement generation algorithm based on deep learning is designed to extract the mapping between sound and m...
Main Authors: Liu, Xin; Ko, Young Chun
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389150/ https://www.ncbi.nlm.nih.gov/pubmed/35990883 http://dx.doi.org/10.3389/fnbot.2022.911469
_version_ | 1784770376731983872 |
---|---|
author | Liu, Xin Ko, Young Chun |
author_facet | Liu, Xin Ko, Young Chun |
author_sort | Liu, Xin |
collection | PubMed |
description | Dances generated by traditional music-action matching and statistical mapping models are poorly consistent with the music itself, and these models cannot generate new dance movements. To solve these problems, a dance movement generation algorithm based on deep learning is designed to extract the mapping between sound and motion features. First, sound and motion features are extracted from music and dance videos, and then the model is built. A generator module, a discriminator module, and an autoencoder module are added to make the generated dance movements smoother and more consistent with the music. The Pix2PixHD model is used to transform the generated dance pose sequences into realistic dance videos. Finally, dance videos from the internet are used as training data, and the model is trained for 5,000 iterations; about 80% of the dance data are used as the training set and 20% as the test set. The experimental results show that the Train, Valid, and Test values of the Generator+Discriminator+Autoencoder model are 15.36, 17.19, and 19.12, respectively. The similarity between the generated dance sequences and the real dance sequences is 0.063, which shows that the proposed model can generate dances more in line with the music and that the generated dance postures are closer to real dance postures. The discussion has reference value for intelligent dance teaching, games, cross-modal generation, and exploring the relationship between audio and visual information. |
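The training setup described in the abstract (an approximately 80/20 train/test split over paired music and dance data, with a combined generator, discriminator, and autoencoder objective) can be sketched roughly as follows. This is a minimal illustration only: the function names, the smoothness term, and the loss weights are assumptions for the sketch, not the paper's actual implementation.

```python
import random

def split_dataset(pairs, train_frac=0.8, seed=0):
    """Shuffle paired (music_features, pose_sequence) samples and split
    them roughly 80/20 into training and test sets, as in the abstract."""
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def total_loss(adv_loss, rec_loss, smooth_loss,
               w_adv=1.0, w_rec=10.0, w_smooth=1.0):
    """Hypothetical combined objective: an adversarial term from the
    discriminator, a reconstruction term from the autoencoder, and a
    smoothness term over the generated pose sequence. The weights are
    placeholders, not values from the paper."""
    return w_adv * adv_loss + w_rec * rec_loss + w_smooth * smooth_loss

# Toy usage: 100 paired samples split 80/20, then one loss evaluation.
pairs = [(f"music_{i}", f"pose_{i}") for i in range(100)]
train, test = split_dataset(pairs)
print(len(train), len(test))       # 80 20
print(total_loss(0.5, 0.1, 0.2))   # 1.7
```

Weighting the reconstruction term much higher than the adversarial term is a common heuristic in GAN-plus-autoencoder pipelines; the actual balance used by the authors is not stated in the record.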
format | Online Article Text |
id | pubmed-9389150 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9389150 2022-08-20 The use of deep learning technology in dance movement generation Liu, Xin Ko, Young Chun Front Neurorobot Neuroscience Frontiers Media S.A. 2022-08-05 /pmc/articles/PMC9389150/ /pubmed/35990883 http://dx.doi.org/10.3389/fnbot.2022.911469 Text en Copyright © 2022 Liu and Ko. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Liu, Xin Ko, Young Chun The use of deep learning technology in dance movement generation |
title | The use of deep learning technology in dance movement generation |
title_full | The use of deep learning technology in dance movement generation |
title_fullStr | The use of deep learning technology in dance movement generation |
title_full_unstemmed | The use of deep learning technology in dance movement generation |
title_short | The use of deep learning technology in dance movement generation |
title_sort | use of deep learning technology in dance movement generation |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389150/ https://www.ncbi.nlm.nih.gov/pubmed/35990883 http://dx.doi.org/10.3389/fnbot.2022.911469 |
work_keys_str_mv | AT liuxin theuseofdeeplearningtechnologyindancemovementgeneration AT koyoungchun theuseofdeeplearningtechnologyindancemovementgeneration AT liuxin useofdeeplearningtechnologyindancemovementgeneration AT koyoungchun useofdeeplearningtechnologyindancemovementgeneration |