Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding
Modeling virtual agents with behavior style is one factor for personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers including those unseen during training....
Main Authors: | Fares, Mireille; Pelachaud, Catherine; Obin, Nicolas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10291316/ https://www.ncbi.nlm.nih.gov/pubmed/37377638 http://dx.doi.org/10.3389/frai.2023.1142997 |
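The abstract describes a two-component architecture: a speaker style encoder that maps a target speaker's multimodal data (mel-spectrogram, pose, and text) to a fixed-dimensional style embedding, and a sequence-to-sequence synthesis network that predicts gestures from the source speaker's content (mel-spectrogram and text) conditioned on that embedding. The following PyTorch sketch only illustrates that layout; all layer choices, feature dimensions, and class names are assumptions made here for illustration, not the authors' architecture or released code.

```python
# Minimal sketch of the two-component layout described in the abstract.
# Dimensions and layers are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn


class SpeakerStyleEncoder(nn.Module):
    """Maps a target speaker's (mel, pose, text) sequences to one fixed-size style vector."""

    def __init__(self, mel_dim=80, pose_dim=126, text_dim=300, style_dim=128):
        super().__init__()
        self.proj = nn.Linear(mel_dim + pose_dim + text_dim, 256)
        self.rnn = nn.GRU(256, style_dim, batch_first=True)

    def forward(self, mel, pose, text):
        # mel/pose/text: (batch, time, features); fuse per frame, then summarize over time.
        x = self.proj(torch.cat([mel, pose, text], dim=-1))
        _, h = self.rnn(x)           # h: (1, batch, style_dim)
        return h.squeeze(0)          # fixed-dimensional speaker style embedding


class GestureGenerator(nn.Module):
    """Synthesizes a pose sequence from source content (mel + text) and a style embedding."""

    def __init__(self, mel_dim=80, text_dim=300, style_dim=128, pose_dim=126):
        super().__init__()
        self.content_rnn = nn.GRU(mel_dim + text_dim, 256, batch_first=True)
        self.out = nn.Linear(256 + style_dim, pose_dim)

    def forward(self, mel, text, style):
        content, _ = self.content_rnn(torch.cat([mel, text], dim=-1))
        # Broadcast the same style vector to every output frame.
        style_seq = style.unsqueeze(1).expand(-1, content.size(1), -1)
        return self.out(torch.cat([content, style_seq], dim=-1))


# Zero-shot use: the style embedding of a speaker unseen during training is inferred
# directly from a short multimodal clip, with no re-training or fine-tuning.
encoder, generator = SpeakerStyleEncoder(), GestureGenerator()
tgt_mel, tgt_pose, tgt_text = torch.randn(1, 200, 80), torch.randn(1, 200, 126), torch.randn(1, 200, 300)
src_mel, src_text = torch.randn(1, 300, 80), torch.randn(1, 300, 300)
with torch.no_grad():
    style = encoder(tgt_mel, tgt_pose, tgt_text)      # target speaker, possibly unseen
    gestures = generator(src_mel, src_text, style)    # (1, 300, pose_dim) pose sequence
```

Conditioning every output frame on one fixed-size style vector is what lets the target speaker be swapped at inference time by recomputing a single embedding.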
_version_ | 1785062668690784256 |
---|---|
author | Fares, Mireille; Pelachaud, Catherine; Obin, Nicolas
author_facet | Fares, Mireille; Pelachaud, Catherine; Obin, Nicolas
author_sort | Fares, Mireille |
collection | PubMed |
description | Modeling virtual agents with behavior style is one factor in personalizing human-agent interaction. We propose an efficient yet effective machine learning approach to synthesize gestures driven by prosodic features and text in the style of different speakers, including those unseen during training. Our model performs zero-shot multimodal style transfer driven by multimodal data from the PATS database, which contains videos of various speakers. We view style as pervasive: while a person speaks, style colors the expressivity of their communicative behaviors, whereas speech content is carried by multimodal signals and text. This disentanglement of content and style allows us to directly infer the style embedding of a speaker whose data are not part of the training phase, without any further training or fine-tuning. The first goal of our model is to generate the gestures of a source speaker based on the content of two input modalities: Mel spectrogram and text semantics. The second goal is to condition the source speaker's predicted gestures on the multimodal behavior style embedding of a target speaker. The third goal is to allow zero-shot style transfer for speakers unseen during training without re-training the model. Our system consists of two main components: (1) a speaker style encoder network that learns to generate a fixed-dimensional speaker style embedding from a target speaker's multimodal data (mel-spectrogram, pose, and text) and (2) a sequence-to-sequence synthesis network that synthesizes gestures based on the content of the input modalities (text and mel-spectrogram) of a source speaker, conditioned on the speaker style embedding. We show that our model is able to synthesize gestures of a source speaker given the two input modalities and to transfer the knowledge of target-speaker style variability learned by the speaker style encoder to the gesture generation task in a zero-shot setup, indicating that the model has learned a high-quality speaker representation. We conduct objective and subjective evaluations to validate our approach and compare it with baselines. |
format | Online Article Text |
id | pubmed-10291316 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10291316 2023-06-27 Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding Fares, Mireille Pelachaud, Catherine Obin, Nicolas Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2023-06-12 /pmc/articles/PMC10291316/ /pubmed/37377638 http://dx.doi.org/10.3389/frai.2023.1142997 Text en Copyright © 2023 Fares, Pelachaud and Obin. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Artificial Intelligence Fares, Mireille; Pelachaud, Catherine; Obin, Nicolas Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title | Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title_full | Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title_fullStr | Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title_full_unstemmed | Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title_short | Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
title_sort | zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10291316/ https://www.ncbi.nlm.nih.gov/pubmed/37377638 http://dx.doi.org/10.3389/frai.2023.1142997 |
work_keys_str_mv | AT faresmireille zeroshotstyletransferforgestureanimationdrivenbytextandspeechusingadversarialdisentanglementofmultimodalstyleencoding AT pelachaudcatherine zeroshotstyletransferforgestureanimationdrivenbytextandspeechusingadversarialdisentanglementofmultimodalstyleencoding AT obinnicolas zeroshotstyletransferforgestureanimationdrivenbytextandspeechusingadversarialdisentanglementofmultimodalstyleencoding |
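The title additionally refers to adversarial disentanglement of the multimodal style encoding, i.e., keeping speaker identity out of the content pathway so that the style embedding can be swapped freely. One common way to realize such disentanglement, shown below only as a hedged sketch and not as the authors' exact formulation, is a gradient-reversal layer in front of an auxiliary speaker classifier; the speaker count and feature dimension are illustrative assumptions.

```python
# Generic gradient-reversal sketch for adversarial content/style disentanglement.
# This is an assumed, common mechanism; the paper's exact losses may differ.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) the gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


# An auxiliary speaker classifier applied to the content representation:
# reversing gradients trains the content encoder to discard speaker-specific style.
num_speakers = 25        # illustrative; PATS contains videos of multiple speakers
content_dim = 256        # assumed size of the content representation
speaker_classifier = nn.Linear(content_dim, num_speakers)

content_features = torch.randn(8, content_dim, requires_grad=True)   # stand-in batch
speaker_ids = torch.randint(0, num_speakers, (8,))

logits = speaker_classifier(grad_reverse(content_features))
adv_loss = nn.functional.cross_entropy(logits, speaker_ids)
adv_loss.backward()      # gradients reaching content_features are sign-flipped
```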