Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization
Restricted by the diversity and complexity of human behaviors, simulating a character that achieves human-level perception and motion control remains an active and challenging research area. We present a style-based teleoperation framework that draws on human perception and analysis to understand...
Main Authors: | Cao, Zhiyan; Bao, Tianxu; Ren, Zeyu; Fan, Yunxin; Deng, Ken; Jia, Wenchuan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8874833/ https://www.ncbi.nlm.nih.gov/pubmed/35214364 http://dx.doi.org/10.3390/s22041457 |
_version_ | 1784657783181803520 |
---|---|
author | Cao, Zhiyan; Bao, Tianxu; Ren, Zeyu; Fan, Yunxin; Deng, Ken; Jia, Wenchuan
author_facet | Cao, Zhiyan; Bao, Tianxu; Ren, Zeyu; Fan, Yunxin; Deng, Ken; Jia, Wenchuan
author_sort | Cao, Zhiyan |
collection | PubMed |
description | Restricted by the diversity and complexity of human behaviors, simulating a character that achieves human-level perception and motion control remains an active and challenging research area. We present a style-based teleoperation framework that draws on human perception and analysis to understand the task at hand and the unknown environment in order to control the character. In this framework, a motion optimization and a body controller with a center-of-mass and root virtual control (CR-VC) method are designed to achieve motion synchronization and style mimicking while maintaining the character's balance. The motion optimization synthesizes high-level human style features with a balance strategy to create a feasible, stylized, and stable pose for the character. The CR-VC method, which includes model-based torque compensation, synchronizes the motion rhythms of the human and the character. Without any inverse-dynamics knowledge or offline preprocessing, our framework generalizes to various scenarios and remains robust to changes in human behavior in real time. We demonstrate the effectiveness of the framework through teleoperation experiments with different tasks, motion styles, and operators. This study is a step toward human-robot interaction in which humans help characters understand and accomplish tasks.
format | Online Article Text |
id | pubmed-8874833 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8874833 2022-02-26 Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization Cao, Zhiyan; Bao, Tianxu; Ren, Zeyu; Fan, Yunxin; Deng, Ken; Jia, Wenchuan Sensors (Basel) Article Restricted by the diversity and complexity of human behaviors, simulating a character that achieves human-level perception and motion control remains an active and challenging research area. We present a style-based teleoperation framework that draws on human perception and analysis to understand the task at hand and the unknown environment in order to control the character. In this framework, a motion optimization and a body controller with a center-of-mass and root virtual control (CR-VC) method are designed to achieve motion synchronization and style mimicking while maintaining the character's balance. The motion optimization synthesizes high-level human style features with a balance strategy to create a feasible, stylized, and stable pose for the character. The CR-VC method, which includes model-based torque compensation, synchronizes the motion rhythms of the human and the character. Without any inverse-dynamics knowledge or offline preprocessing, our framework generalizes to various scenarios and remains robust to changes in human behavior in real time. We demonstrate the effectiveness of the framework through teleoperation experiments with different tasks, motion styles, and operators. This study is a step toward human-robot interaction in which humans help characters understand and accomplish tasks. MDPI 2022-02-14 /pmc/articles/PMC8874833/ /pubmed/35214364 http://dx.doi.org/10.3390/s22041457 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Cao, Zhiyan; Bao, Tianxu; Ren, Zeyu; Fan, Yunxin; Deng, Ken; Jia, Wenchuan Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization
title | Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization |
title_full | Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization |
title_fullStr | Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization |
title_full_unstemmed | Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization |
title_short | Real-Time Stylized Humanoid Behavior Control through Interaction and Synchronization |
title_sort | real-time stylized humanoid behavior control through interaction and synchronization |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8874833/ https://www.ncbi.nlm.nih.gov/pubmed/35214364 http://dx.doi.org/10.3390/s22041457 |
work_keys_str_mv | AT caozhiyan realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization AT baotianxu realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization AT renzeyu realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization AT fanyunxin realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization AT dengken realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization AT jiawenchuan realtimestylizedhumanoidbehaviorcontrolthroughinteractionandsynchronization |