
Understanding Moment‐to‐Moment Processing of Visual Narratives

What role do moment‐to‐moment comprehension processes play in visual attentional selection in picture stories? The current work uniquely tested the role of bridging inference generation processes on eye movements while participants viewed picture stories. Specific components of the Scene Perception and Event Comprehension Theory (SPECT) were tested. Bridging inference generation was induced by manipulating the presence of highly inferable actions embedded in picture stories. When inferable actions are missing, participants have increased viewing times for the immediately following critical image (Magliano, Larson, Higgs, & Loschky, 2016). This study used eye‐tracking to test competing hypotheses about the increased viewing time: (a) Computational Load: inference generation processes increase overall computational load, producing longer fixation durations; (b) Visual Search: inference generation processes guide eye‐movements to pick up inference‐relevant information, producing more fixations. Participants had similar fixation durations, but they made more fixations while generating inferences, with that process starting from the fifth fixation. A follow‐up hypothesis predicted that when generating inferences, participants fixate scene regions important for generating the inference. A separate group of participants rated the inferential‐relevance of regions in the critical images, and results showed that these inferentially relevant regions predicted differences in other viewers’ eye movements. Thus, viewers’ event models in working memory affect visual attentional selection while viewing visual narratives.


Bibliographic Details

Main Authors: Hutson, John P., Magliano, Joseph P., Loschky, Lester C.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2018
Subjects: Regular Articles
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587724/
https://www.ncbi.nlm.nih.gov/pubmed/30447018
http://dx.doi.org/10.1111/cogs.12699
Journal: Cognitive Science ‐ A Multidisciplinary Journal (Cogn Sci), published online 2018-11-16.
License: © 2018 The Authors. Cognitive Science ‐ A Multidisciplinary Journal published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society (CSS). This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.