
Concurrent talking in immersive virtual reality: on the dominance of visual speech cues

Humans are good at selectively listening to specific target conversations, even in the presence of multiple concurrent speakers. In our research, we study how auditory-visual cues modulate this selective listening. We do so by using immersive Virtual Reality technologies with spatialized audio. Exposing 32 participants to an Information Masking Task with concurrent speakers, we find significantly more errors in the decision-making processes triggered by asynchronous audiovisual speech cues. More precisely, the results show that lips on the Target speaker matched to a secondary (Mask) speaker’s audio severely increase the participants’ comprehension error rates. In a control experiment (n = 20), we further explore the influences of the visual modality over auditory selective attention. The results show a dominance of visual-speech cues, which effectively turn the Mask into the Target and vice-versa. These results reveal a disruption of selective attention that is triggered by bottom-up multisensory integration. The findings are framed in the sensory perception and cognitive neuroscience theories. The VR setup is validated by replicating previous results in this literature in a supplementary experiment.


Bibliographic Details
Main Authors: Gonzalez-Franco, Mar, Maselli, Antonella, Florencio, Dinei, Smolyanskiy, Nikolai, Zhang, Zhengyou
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5476615/
https://www.ncbi.nlm.nih.gov/pubmed/28630450
http://dx.doi.org/10.1038/s41598-017-04201-x
_version_ 1783244622541094912
author Gonzalez-Franco, Mar
Maselli, Antonella
Florencio, Dinei
Smolyanskiy, Nikolai
Zhang, Zhengyou
author_facet Gonzalez-Franco, Mar
Maselli, Antonella
Florencio, Dinei
Smolyanskiy, Nikolai
Zhang, Zhengyou
author_sort Gonzalez-Franco, Mar
collection PubMed
description Humans are good at selectively listening to specific target conversations, even in the presence of multiple concurrent speakers. In our research, we study how auditory-visual cues modulate this selective listening. We do so by using immersive Virtual Reality technologies with spatialized audio. Exposing 32 participants to an Information Masking Task with concurrent speakers, we find significantly more errors in the decision-making processes triggered by asynchronous audiovisual speech cues. More precisely, the results show that lips on the Target speaker matched to a secondary (Mask) speaker’s audio severely increase the participants’ comprehension error rates. In a control experiment (n = 20), we further explore the influences of the visual modality over auditory selective attention. The results show a dominance of visual-speech cues, which effectively turn the Mask into the Target and vice-versa. These results reveal a disruption of selective attention that is triggered by bottom-up multisensory integration. The findings are framed in the sensory perception and cognitive neuroscience theories. The VR setup is validated by replicating previous results in this literature in a supplementary experiment.
format Online
Article
Text
id pubmed-5476615
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-54766152017-06-23 Concurrent talking in immersive virtual reality: on the dominance of visual speech cues Gonzalez-Franco, Mar Maselli, Antonella Florencio, Dinei Smolyanskiy, Nikolai Zhang, Zhengyou Sci Rep Article Humans are good at selectively listening to specific target conversations, even in the presence of multiple concurrent speakers. In our research, we study how auditory-visual cues modulate this selective listening. We do so by using immersive Virtual Reality technologies with spatialized audio. Exposing 32 participants to an Information Masking Task with concurrent speakers, we find significantly more errors in the decision-making processes triggered by asynchronous audiovisual speech cues. More precisely, the results show that lips on the Target speaker matched to a secondary (Mask) speaker’s audio severely increase the participants’ comprehension error rates. In a control experiment (n = 20), we further explore the influences of the visual modality over auditory selective attention. The results show a dominance of visual-speech cues, which effectively turn the Mask into the Target and vice-versa. These results reveal a disruption of selective attention that is triggered by bottom-up multisensory integration. The findings are framed in the sensory perception and cognitive neuroscience theories. The VR setup is validated by replicating previous results in this literature in a supplementary experiment. Nature Publishing Group UK 2017-06-19 /pmc/articles/PMC5476615/ /pubmed/28630450 http://dx.doi.org/10.1038/s41598-017-04201-x Text en © The Author(s) 2017 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Gonzalez-Franco, Mar
Maselli, Antonella
Florencio, Dinei
Smolyanskiy, Nikolai
Zhang, Zhengyou
Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title_full Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title_fullStr Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title_full_unstemmed Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title_short Concurrent talking in immersive virtual reality: on the dominance of visual speech cues
title_sort concurrent talking in immersive virtual reality: on the dominance of visual speech cues
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5476615/
https://www.ncbi.nlm.nih.gov/pubmed/28630450
http://dx.doi.org/10.1038/s41598-017-04201-x
work_keys_str_mv AT gonzalezfrancomar concurrenttalkinginimmersivevirtualrealityonthedominanceofvisualspeechcues
AT maselliantonella concurrenttalkinginimmersivevirtualrealityonthedominanceofvisualspeechcues
AT florenciodinei concurrenttalkinginimmersivevirtualrealityonthedominanceofvisualspeechcues
AT smolyanskiynikolai concurrenttalkinginimmersivevirtualrealityonthedominanceofvisualspeechcues
AT zhangzhengyou concurrenttalkinginimmersivevirtualrealityonthedominanceofvisualspeechcues