
Learning to generate pointing gestures in situated embodied conversational agents

One of the main goals of robotics and intelligent agent research is to enable robots and intelligent agents to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adapt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation is done in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves a higher level of referential accuracy and motion naturalness compared to a state-of-the-art supervised learning motion synthesis model, showing the promise of our proposed system that combines imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically simulated environment and thus has the potential to be applied to robots.

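The key technical idea in the abstract is to combine imitation learning (which drives motion naturalness) with reinforcement learning (which drives referential accuracy). As a rough, hypothetical sketch of what such a combined objective can look like (not the authors' implementation; all function names, shapes, and weights below are illustrative assumptions), one common pattern is to mix an imitation reward against a reference motion clip with a task reward measuring how well the pointing ray aligns with the referent:

# Hypothetical sketch, not the paper's code: a combined imitation + task reward
# of the kind the abstract describes. Names, shapes and weights are assumptions.
import numpy as np

def pointing_accuracy(wrist_pos: np.ndarray, point_dir: np.ndarray,
                      target_pos: np.ndarray) -> float:
    """Cosine similarity between the pointing ray and the wrist-to-target direction."""
    to_target = target_pos - wrist_pos
    to_target = to_target / (np.linalg.norm(to_target) + 1e-8)
    point_dir = point_dir / (np.linalg.norm(point_dir) + 1e-8)
    return float(np.dot(point_dir, to_target))

def imitation_reward(pose: np.ndarray, ref_pose: np.ndarray, scale: float = 5.0) -> float:
    """Exponentiated mean squared joint error against a reference motion pose."""
    return float(np.exp(-scale * np.mean((pose - ref_pose) ** 2)))

def combined_reward(pose, ref_pose, wrist_pos, point_dir, target_pos,
                    w_imitate: float = 0.7, w_point: float = 0.3) -> float:
    """Weighted mix of a motion-naturalness term and a referential-accuracy term."""
    return (w_imitate * imitation_reward(pose, ref_pose)
            + w_point * pointing_accuracy(wrist_pos, point_dir, target_pos))

# Example usage with dummy values
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pose, ref_pose = rng.normal(size=30), rng.normal(size=30)
    r = combined_reward(pose, ref_pose,
                        wrist_pos=np.array([0.0, 1.4, 0.2]),
                        point_dir=np.array([1.0, 0.0, 0.5]),
                        target_pos=np.array([2.0, 1.2, 1.0]))
    print(f"combined reward: {r:.3f}")

In a setup of this kind, the weighting between the two terms trades off naturalness of the gesture against precision of the pointing direction.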

Bibliographic Details
Main Authors: Deichler, Anna; Wang, Siyang; Alexanderson, Simon; Beskow, Jonas
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Robotics and AI
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10097883/
https://www.ncbi.nlm.nih.gov/pubmed/37064574
http://dx.doi.org/10.3389/frobt.2023.1110534
Record ID: pubmed-10097883 (PubMed collection, National Center for Biotechnology Information; MEDLINE/PubMed record format)
Journal: Front Robot AI (Frontiers in Robotics and AI), section: Robotics and AI
Published online: 2023-03-30
Copyright © 2023 Deichler, Wang, Alexanderson and Beskow. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.