Capturing Conversational Gestures for Embodied Conversational Agents Using an Optimized Kaneda–Lucas–Tomasi Tracker and Denavit–Hartenberg-Based Kinematic Model


Bibliographic Details

Main Authors: Močnik, Grega, Kačič, Zdravko, Šafarič, Riko, Mlakar, Izidor
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9656321/
https://www.ncbi.nlm.nih.gov/pubmed/36366016
http://dx.doi.org/10.3390/s22218318
_version_ 1784829404691562496
author Močnik, Grega
Kačič, Zdravko
Šafarič, Riko
Mlakar, Izidor
collection PubMed
description In order to recreate viable and human-like conversational responses, the artificial entity, i.e., an embodied conversational agent, must express correlated speech (verbal) and gestures (non-verbal) responses in spoken social interaction. Most of the existing frameworks focus on intent planning and behavior planning. The realization, however, is left to a limited set of static 3D representations of conversational expressions. In addition to functional and semantic synchrony between verbal and non-verbal signals, the final believability of the displayed expression is sculpted by the physical realization of non-verbal expressions. A major challenge of most conversational systems capable of reproducing gestures is the diversity in expressiveness. In this paper, we propose a method for capturing gestures automatically from videos and transforming them into 3D representations stored as part of the conversational agent’s repository of motor skills. The main advantage of the proposed method is ensuring the naturalness of the embodied conversational agent’s gestures, which results in a higher quality of human-computer interaction. The method is based on a Kanade–Lucas–Tomasi tracker, a Savitzky–Golay filter, a Denavit–Hartenberg-based kinematic model and the EVA framework. Furthermore, we designed an objective method based on cosine similarity instead of a subjective evaluation of synthesized movement. The proposed method resulted in a 96% similarity.
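The description names a concrete processing chain: Kanade–Lucas–Tomasi point tracking, Savitzky–Golay smoothing of the tracked trajectories, a Denavit–Hartenberg-based kinematic model, and a cosine-similarity score for objective evaluation. A minimal, self-contained sketch of the latter three stages is given below. All function names are my own, and the 5-point quadratic Savitzky–Golay window is an assumed configuration for illustration, not a parameter stated by the paper.

```python
import math

def savgol_smooth(signal):
    # 5-point, quadratic Savitzky-Golay smoothing pass
    # (classic coefficients [-3, 12, 17, 12, -3] / 35); assumed window,
    # used here to illustrate suppressing tracker jitter in a trajectory.
    c = [-3, 12, 17, 12, -3]
    out = list(signal)
    for i in range(2, len(signal) - 2):
        out[i] = sum(c[k] * signal[i - 2 + k] for k in range(5)) / 35.0
    return out

def dh_transform(theta, d, a, alpha):
    # Homogeneous 4x4 transform of one joint under the standard
    # Denavit-Hartenberg convention (theta, d, a, alpha parameters).
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def cosine_similarity(u, v):
    # Objective similarity between a captured and a synthesized motion
    # vector; 1.0 means identical direction. The paper reports ~96%.
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(y * y for y in v)))
```

A hypothetical evaluation step would then be `cosine_similarity(savgol_smooth(captured_trajectory), synthesized_trajectory)`, comparing the smoothed capture against the motion replayed through the kinematic model.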
format Online
Article
Text
id pubmed-9656321
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9656321 2022-11-15 Sensors (Basel) Article MDPI 2022-10-29 /pmc/articles/PMC9656321/ /pubmed/36366016 http://dx.doi.org/10.3390/s22218318 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Capturing Conversational Gestures for Embodied Conversational Agents Using an Optimized Kaneda–Lucas–Tomasi Tracker and Denavit–Hartenberg-Based Kinematic Model
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9656321/
https://www.ncbi.nlm.nih.gov/pubmed/36366016
http://dx.doi.org/10.3390/s22218318