An adaptive control framework based multi-modal information-driven dance composition model for musical robots
Currently, most robot dances are pre-compiled: the need to manually adjust the relevant parameters and meta-actions whenever the dance must match another type of music greatly limits their usefulness. To close this gap, this study proposed a dance composition model for mobile robots based on m...
Main Authors: Xu, Fumei; Xia, Yu; Wu, Xiaorun
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590936/ https://www.ncbi.nlm.nih.gov/pubmed/37876550 http://dx.doi.org/10.3389/fnbot.2023.1270652
_version_ | 1785124109599899648
author | Xu, Fumei; Xia, Yu; Wu, Xiaorun
author_facet | Xu, Fumei; Xia, Yu; Wu, Xiaorun
author_sort | Xu, Fumei |
collection | PubMed |
description | Currently, most robot dances are pre-compiled: the need to manually adjust the relevant parameters and meta-actions whenever the dance must match another type of music greatly limits their usefulness. To close this gap, this study proposed a dance composition model for mobile robots based on multimodal information. The model consists of three parts. (1) Extraction of multimodal information. The temporal-structure feature method of a structure analysis framework divides an audio music file into musical sections; a hierarchical emotion detection framework then extracts information (rhythm, emotion, tension, etc.) from each section; the motion safety of the robot with respect to surrounding objects is calculated; finally, the stage color at the robot's location is extracted and mapped to the corresponding atmosphere emotion. (2) Initialization of the dance library. Dance compositions are divided into four categories according to the classification of music emotions; in addition, each category is further divided into skilled compositions and general compositions. (3) Path planning and trajectory tracking. The total path length is obtained by combining the multimodal information with the emotion, the initial speed, and the period of each musical section; target points are then planned according to the specific dance composition selected. An adaptive control framework based on the Cerebellar Model Articulation Controller (CMAC) and compensation controllers tracks the target-point trajectory, and the selected dance composition is finally performed. Mobile robot dance composition provides a new method and concept for humanoid robot dance composition. |
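The abstract describes an adaptive control framework that pairs a Cerebellar Model Articulation Controller (CMAC) with compensation controllers to track the planned target-point trajectory. The paper's actual implementation is not included in this record, so the snippet below is only a minimal, hypothetical sketch of the general idea: a tile-coding CMAC learns a feedforward term for a periodic reference while a PD "compensation" controller handles the residual tracking error, on a toy unit-mass plant standing in for one robot axis. All names, gains, and the plant model here are assumptions, not the authors' design.

```python
import math


class CMAC:
    """Minimal tile-coding CMAC: several staggered coarse tilings over a scalar input."""

    def __init__(self, n_tilings=8, n_tiles=32, x_min=0.0, x_max=2.0 * math.pi, lr=0.1):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.x_min = x_min
        self.x_max = x_max
        self.lr = lr
        self.w = [[0.0] * n_tiles for _ in range(n_tilings)]

    def _active_tiles(self, x):
        span = self.x_max - self.x_min
        tile_width = span / self.n_tiles
        for t in range(self.n_tilings):
            offset = t * tile_width / self.n_tilings  # stagger each tiling
            idx = int((x - self.x_min + offset) / tile_width)
            yield t, max(0, min(self.n_tiles - 1, idx))

    def predict(self, x):
        # Output is the sum of the weights of all active tiles.
        return sum(self.w[t][i] for t, i in self._active_tiles(x))

    def update(self, x, error):
        # Distribute the correction evenly over the active tiles.
        step = self.lr * error / self.n_tilings
        for t, i in self._active_tiles(x):
            self.w[t][i] += step


def track(duration=40.0, dt=0.005):
    """Track ref(t) = sin(t) with CMAC feedforward plus PD compensation.

    Plant: a unit-mass double integrator (a stand-in for one robot axis).
    Returns the mean absolute tracking error over the final second.
    """
    cmac = CMAC()
    kp, kd = 40.0, 8.0          # hypothetical PD compensation gains
    pos, vel = 0.0, 0.0
    errors = []
    for k in range(int(duration / dt)):
        t = k * dt
        phase = t % (2.0 * math.pi)          # phase of the periodic reference
        ref, ref_dot = math.sin(t), math.cos(t)
        err = ref - pos
        u = cmac.predict(phase) + kp * err + kd * (ref_dot - vel)
        cmac.update(phase, err)              # adapt the feedforward online
        vel += u * dt                        # Euler-integrate the plant
        pos += vel * dt
        errors.append(abs(err))
    tail = int(1.0 / dt)
    return sum(errors[-tail:]) / tail
```

Calling `track()` returns the mean absolute tracking error over the final second of the simulation, which stays small once the feedforward has adapted; the CMAC term gradually absorbs the repeating part of the control signal so the compensation controller only corrects the remainder, which is the usual motivation for this kind of hybrid scheme.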
format | Online Article Text |
id | pubmed-10590936 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-105909362023-10-24 An adaptive control framework based multi-modal information-driven dance composition model for musical robots Xu, Fumei Xia, Yu Wu, Xiaorun Front Neurorobot Neuroscience Currently, most robot dances are pre-compiled: the need to manually adjust the relevant parameters and meta-actions whenever the dance must match another type of music greatly limits their usefulness. To close this gap, this study proposed a dance composition model for mobile robots based on multimodal information. The model consists of three parts. (1) Extraction of multimodal information. The temporal-structure feature method of a structure analysis framework divides an audio music file into musical sections; a hierarchical emotion detection framework then extracts information (rhythm, emotion, tension, etc.) from each section; the motion safety of the robot with respect to surrounding objects is calculated; finally, the stage color at the robot's location is extracted and mapped to the corresponding atmosphere emotion. (2) Initialization of the dance library. Dance compositions are divided into four categories according to the classification of music emotions; in addition, each category is further divided into skilled compositions and general compositions. (3) Path planning and trajectory tracking. The total path length is obtained by combining the multimodal information with the emotion, the initial speed, and the period of each musical section; target points are then planned according to the specific dance composition selected. An adaptive control framework based on the Cerebellar Model Articulation Controller (CMAC) and compensation controllers tracks the target-point trajectory, and the selected dance composition is finally performed. Mobile robot dance composition provides a new method and concept for humanoid robot dance composition. Frontiers Media S.A.
2023-10-09 /pmc/articles/PMC10590936/ /pubmed/37876550 http://dx.doi.org/10.3389/fnbot.2023.1270652 Text en Copyright © 2023 Xu, Xia and Wu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Xu, Fumei Xia, Yu Wu, Xiaorun An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title | An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title_full | An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title_fullStr | An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title_full_unstemmed | An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title_short | An adaptive control framework based multi-modal information-driven dance composition model for musical robots |
title_sort | adaptive control framework based multi-modal information-driven dance composition model for musical robots |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590936/ https://www.ncbi.nlm.nih.gov/pubmed/37876550 http://dx.doi.org/10.3389/fnbot.2023.1270652 |
work_keys_str_mv | AT xufumei anadaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots AT xiayu anadaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots AT wuxiaorun anadaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots AT xufumei adaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots AT xiayu adaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots AT wuxiaorun adaptivecontrolframeworkbasedmultimodalinformationdrivendancecompositionmodelformusicalrobots |