
Synchronization in Interpersonal Speech

During both positive and negative dyadic exchanges, individuals often unconsciously imitate their partner. A substantial body of research has examined this phenomenon, showing that synchronization between communication partners can improve interpersonal relationships. Automatic computational approaches for recognizing synchrony, however, are still in their infancy. In this study, we extend previous work in which we applied a novel method utilizing hand-crafted low-level acoustic descriptors and autoencoders (AEs) to analyse synchrony in the speech domain. For this purpose, we use a database of 394 in-the-wild speakers from six different cultures. For each dyadic exchange, two AEs are implemented, one per speaker. After training, the acoustic features of one speaker are evaluated with the AE trained on their dyadic partner. In the same way, we also explore the benefits of deep audio representations, employing the state-of-the-art Deep Spectrum toolkit. For all speakers, at varied time points during their interaction, we calculate the reconstruction error from the AE trained on their respective dyadic partner. The results of this acoustic analysis are then compared with linguistic experiments based on word counts and word embeddings generated by our word2vec approach. The results demonstrate that there is a degree of synchrony during all interactions, and that this degree varies across the six cultures in the investigated database. These findings are further substantiated through the use of 4,096-dimensional Deep Spectrum features.
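The cross-reconstruction idea in the abstract can be illustrated with a short sketch: train an autoencoder on one speaker's acoustic feature vectors, then score the partner's features by how well that autoencoder reconstructs them, with lower error suggesting greater acoustic similarity. This is a minimal sketch under stated assumptions: the network size, feature dimensionality, and training setup are illustrative stand-ins, not the authors' configuration. The same scoring would apply unchanged to 4,096-dimensional Deep Spectrum vectors.

```python
# Hedged sketch of autoencoder-based synchrony scoring (not the paper's
# exact architecture or hyper-parameters). One AE is trained per speaker;
# synchrony is read from the partner's reconstruction error over time.
import torch
import torch.nn as nn

FEAT_DIM = 130  # assumption: a ComParE-style low-level-descriptor set


class SpeakerAE(nn.Module):
    def __init__(self, feat_dim: int = FEAT_DIM, bottleneck: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_ae(features: torch.Tensor, epochs: int = 50) -> SpeakerAE:
    """Fit an AE on one speaker's (frames x FEAT_DIM) feature matrix."""
    model = SpeakerAE(features.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), features)
        loss.backward()
        opt.step()
    return model


@torch.no_grad()
def cross_reconstruction_error(model: SpeakerAE,
                               partner_feats: torch.Tensor) -> torch.Tensor:
    """Per-frame MSE when one speaker's features pass through the
    partner's AE; tracking this across time windows indicates whether
    the speakers drift toward each other's acoustic space."""
    return ((model(partner_feats) - partner_feats) ** 2).mean(dim=1)


# Toy usage with random stand-ins for two speakers' feature matrices:
speaker_a = torch.randn(500, FEAT_DIM)
speaker_b = torch.randn(500, FEAT_DIM)
ae_a = train_ae(speaker_a)
print(cross_reconstruction_error(ae_a, speaker_b).mean().item())
```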

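For the linguistic side, a similarly hedged sketch: train a small word2vec model on the dyad's transcripts and compare the partners' mean utterance vectors with cosine similarity. The toy transcripts, hyper-parameters, and turn pairing below are invented stand-ins; the paper's exact word-count and embedding comparison is not reproduced here.

```python
# Hedged sketch of word2vec-based linguistic convergence scoring.
# Rising turn-level cosine similarity over an interaction would point
# to lexical/semantic convergence between the dyadic partners.
import numpy as np
from gensim.models import Word2Vec


def mean_vector(model: Word2Vec, tokens: list[str]) -> np.ndarray:
    """Average the embeddings of a turn's in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0


# Toy transcripts for one dyad (stand-ins for real transcript output).
speaker_a_turns = [["i", "think", "we", "should", "travel", "soon"],
                   ["travel", "is", "good", "for", "us"]]
speaker_b_turns = [["yes", "we", "should", "travel"],
                   ["travel", "sounds", "good"]]

model = Word2Vec(speaker_a_turns + speaker_b_turns,
                 vector_size=50, min_count=1, epochs=50, seed=1)

for turn_a, turn_b in zip(speaker_a_turns, speaker_b_turns):
    sim = cosine(mean_vector(model, turn_a), mean_vector(model, turn_b))
    print(f"turn similarity: {sim:.3f}")
```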

Bibliographic Details

Main Authors: Amiriparian, Shahin; Han, Jing; Schmitt, Maximilian; Baird, Alice; Mallol-Ragolta, Adria; Milling, Manuel; Gerczuk, Maurice; Schuller, Björn
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2019
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7806071/
https://www.ncbi.nlm.nih.gov/pubmed/33501131
http://dx.doi.org/10.3389/frobt.2019.00116
Collection: PubMed
Record ID: pubmed-7806071
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Robot AI (Robotics and AI)
Published online: 2019-11-08
Copyright © 2019 Amiriparian, Han, Schmitt, Baird, Mallol-Ragolta, Milling, Gerczuk and Schuller. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.