Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art
Main Authors: | Lu, Shiyi; Wang, Panpan |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10570463/ https://www.ncbi.nlm.nih.gov/pubmed/37841080 http://dx.doi.org/10.3389/fnbot.2023.1281944 |
_version_ | 1785119773643767808 |
author | Lu, Shiyi; Wang, Panpan |
author_facet | Lu, Shiyi; Wang, Panpan |
author_sort | Lu, Shiyi |
collection | PubMed |
description | INTRODUCTION: In the context of evolving societal preferences for deeper emotional connections in art, this paper explores the emergence of multimodal robot music performance art. It investigates the fusion of music and motion in robot performances to enhance expressiveness and emotional impact. The study employs Transformer models to combine audio and video signals, enabling robots to better understand music's rhythm, melody, and emotional content. Generative Adversarial Networks (GANs) are utilized to create lifelike visual performances synchronized with music, bridging auditory and visual perception. Multimodal reinforcement learning is employed to achieve harmonious alignment between sound and motion. METHODS: The study leverages Transformer models to process audio and video signals in robot performances. Generative Adversarial Networks are employed to generate visually appealing performances that align with the musical input. Multimodal reinforcement learning is used to synchronize robot actions with music. Diverse music styles and emotions are considered in the experiments. Performance evaluation metrics include accuracy, recall rate, and F1 score. RESULTS: The proposed approach yields promising results across various music styles and emotional contexts. Performance smoothness scores exceed 94 points, demonstrating the fluidity of robot actions. An accuracy rate of 95% highlights the precision of the system in aligning robot actions with music. Notably, there is a substantial 33% enhancement in performance recall rate compared to baseline modules. The collective improvement in F1 score emphasizes the advantages of the proposed approach in the realm of robot music performance art. DISCUSSION: The study's findings demonstrate the potential of multimodal robot music performance art in achieving heightened emotional impact. By combining audio and visual cues, robots can better interpret and respond to music, resulting in smoother and more precise performances. The substantial improvement in recall rate suggests that the proposed approach enhances the robots' ability to accurately mirror the emotional nuances of the music. These results signify the potential of this approach to transform the landscape of artistic expression through robotics, opening new avenues for emotionally resonant performances. |
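The evaluation metrics the description names (accuracy, recall rate, F1 score) reduce to standard confusion-matrix arithmetic. The sketch below is illustrative only — the label names and data are invented, not taken from the paper's experiments:

```python
# Toy evaluation of predicted robot-action labels against ground truth,
# using the metrics named in the abstract: accuracy, recall, F1.
# All labels and data here are illustrative, not from the paper.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive):
    """TP / (TP + FP), treating one class as 'positive'."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp) if tp + fp else 0.0

def recall(y_true, y_pred, positive):
    """TP / (TP + FN), treating one class as 'positive'."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if tp + fn else 0.0

def f1(y_true, y_pred, positive):
    """Harmonic mean of precision and recall."""
    p = precision(y_true, y_pred, positive)
    r = recall(y_true, y_pred, positive)
    return 2 * p * r / (p + r) if p + r else 0.0

if __name__ == "__main__":
    true = ["beat", "rest", "beat", "beat", "rest", "beat"]
    pred = ["beat", "beat", "beat", "rest", "rest", "beat"]
    print(accuracy(true, pred))        # 4 of 6 labels match
    print(recall(true, pred, "beat"))  # 3 of 4 true beats recovered
    print(f1(true, pred, "beat"))
```

The paper reports these metrics aggregated over diverse music styles; the functions above show only the per-class definitions they rest on.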
format | Online Article Text |
id | pubmed-10570463 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-105704632023-10-14 Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art Lu, Shiyi Wang, Panpan Front Neurorobot Neuroscience Frontiers Media S.A. 2023-09-29 /pmc/articles/PMC10570463/ /pubmed/37841080 http://dx.doi.org/10.3389/fnbot.2023.1281944 Text en Copyright © 2023 Lu and Wang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
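The Transformer-based audio-video fusion the record describes rests on attention: an audio feature (the query) weighs video-frame features (keys and values) by similarity. The following is a toy scaled dot-product cross-attention sketch, not the authors' architecture — every dimension, name, and value is invented for illustration:

```python
import math

# Toy cross-modal attention: one audio embedding attends over a short
# sequence of video-frame embeddings. Illustrative only; not the
# paper's model.

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(query, keys, values):
    """Scaled dot-product attention: fuse one query with key/value rows."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Fused vector: attention-weighted sum of the value rows.
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

if __name__ == "__main__":
    audio_q = [0.2, 0.9]                             # toy audio embedding
    video_kv = [[0.1, 0.8], [0.9, 0.1], [0.0, 1.0]]  # toy frame embeddings
    fused, weights = cross_attention(audio_q, video_kv, video_kv)
    print(weights)  # sums to 1; the frame most aligned with the audio wins
```

In a full Transformer this happens per head with learned projection matrices; the sketch keeps only the core weighting step that lets an audio cue pick out the matching visual frames.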
spellingShingle | Neuroscience Lu, Shiyi Wang, Panpan Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title | Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title_full | Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title_fullStr | Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title_full_unstemmed | Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title_short | Multi-dimensional fusion: transformer and GANs-based multimodal audiovisual perception robot for musical performance art |
title_sort | multi-dimensional fusion: transformer and gans-based multimodal audiovisual perception robot for musical performance art |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10570463/ https://www.ncbi.nlm.nih.gov/pubmed/37841080 http://dx.doi.org/10.3389/fnbot.2023.1281944 |
work_keys_str_mv | AT lushiyi multidimensionalfusiontransformerandgansbasedmultimodalaudiovisualperceptionrobotformusicalperformanceart AT wangpanpan multidimensionalfusiontransformerandgansbasedmultimodalaudiovisualperceptionrobotformusicalperformanceart |