Language-driven anticipatory eye movements in virtual reality
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Main Authors: Eichert, Nicole; Peeters, David; Hagoort, Peter
Format: Online Article Text
Language: English
Published: Springer US, 2017
Subjects: Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5990548/
https://www.ncbi.nlm.nih.gov/pubmed/28791625
http://dx.doi.org/10.3758/s13428-017-0929-z
| _version_ | 1783329597127917568 |
|---|---|
| author | Eichert, Nicole; Peeters, David; Hagoort, Peter |
| author_facet | Eichert, Nicole; Peeters, David; Hagoort, Peter |
| author_sort | Eichert, Nicole |
| collection | PubMed |
| description | Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.3758/s13428-017-0929-z) contains supplementary material, which is available to authorized users. |
| format | Online Article Text |
| id | pubmed-5990548 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2017 |
| publisher | Springer US |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-5990548 2018-06-19 Language-driven anticipatory eye movements in virtual reality Eichert, Nicole Peeters, David Hagoort, Peter Behav Res Methods Article Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.3758/s13428-017-0929-z) contains supplementary material, which is available to authorized users. Springer US 2017-08-08 2018 /pmc/articles/PMC5990548/ /pubmed/28791625 http://dx.doi.org/10.3758/s13428-017-0929-z Text en © The Author(s) 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
| spellingShingle | Article; Eichert, Nicole; Peeters, David; Hagoort, Peter; Language-driven anticipatory eye movements in virtual reality |
| title | Language-driven anticipatory eye movements in virtual reality |
| title_full | Language-driven anticipatory eye movements in virtual reality |
| title_fullStr | Language-driven anticipatory eye movements in virtual reality |
| title_full_unstemmed | Language-driven anticipatory eye movements in virtual reality |
| title_short | Language-driven anticipatory eye movements in virtual reality |
| title_sort | language-driven anticipatory eye movements in virtual reality |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5990548/ https://www.ncbi.nlm.nih.gov/pubmed/28791625 http://dx.doi.org/10.3758/s13428-017-0929-z |
| work_keys_str_mv | AT eichertnicole languagedrivenanticipatoryeyemovementsinvirtualreality; AT peetersdavid languagedrivenanticipatoryeyemovementsinvirtualreality; AT hagoortpeter languagedrivenanticipatoryeyemovementsinvirtualreality |