
Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions

Bibliographic Details
Main Authors: Pagán Cánovas, Cristóbal; Valenzuela, Javier; Alcaraz Carrión, Daniel; Olza, Inés; Ramscar, Michael
Format: Online Article Text
Language: English
Published: Public Library of Science, 2020
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7266323/
https://www.ncbi.nlm.nih.gov/pubmed/32484842
http://dx.doi.org/10.1371/journal.pone.0233892
_version_ 1783541285866438656
author Pagán Cánovas, Cristóbal
Valenzuela, Javier
Alcaraz Carrión, Daniel
Olza, Inés
Ramscar, Michael
author_sort Pagán Cánovas, Cristóbal
collection PubMed
description The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.
format Online
Article
Text
id pubmed-7266323
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-7266323 2020-06-10 Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions Pagán Cánovas, Cristóbal Valenzuela, Javier Alcaraz Carrión, Daniel Olza, Inés Ramscar, Michael PLoS One Research Article The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship in the high degree of co-occurrence between gesture and speech in our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication. Public Library of Science 2020-06-02 /pmc/articles/PMC7266323/ /pubmed/32484842 http://dx.doi.org/10.1371/journal.pone.0233892 Text en © 2020 Pagán Cánovas et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title Quantifying the speech-gesture relation with massive multimodal datasets: Informativity in time expressions
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7266323/
https://www.ncbi.nlm.nih.gov/pubmed/32484842
http://dx.doi.org/10.1371/journal.pone.0233892