Understanding Moment‐to‐Moment Processing of Visual Narratives
| Main Authors: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | John Wiley and Sons Inc., 2018 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6587724/ https://www.ncbi.nlm.nih.gov/pubmed/30447018 http://dx.doi.org/10.1111/cogs.12699 |
| Summary: | What role do moment‐to‐moment comprehension processes play in visual attentional selection in picture stories? The current work uniquely tested the role of bridging inference generation processes on eye movements while participants viewed picture stories. Specific components of the Scene Perception and Event Comprehension Theory (SPECT) were tested. Bridging inference generation was induced by manipulating the presence of highly inferable actions embedded in picture stories. When inferable actions are missing, participants have increased viewing times for the immediately following critical image (Magliano, Larson, Higgs, & Loschky, 2016). This study used eye‐tracking to test competing hypotheses about the increased viewing time: (a) Computational Load: inference generation processes increase overall computational load, producing longer fixation durations; (b) Visual Search: inference generation processes guide eye movements to pick up inference‐relevant information, producing more fixations. Participants had similar fixation durations, but they made more fixations while generating inferences, with that process starting from the fifth fixation. A follow‐up hypothesis predicted that when generating inferences, participants fixate scene regions important for generating the inference. A separate group of participants rated the inferential relevance of regions in the critical images, and results showed that these inferentially relevant regions predicted differences in other viewers' eye movements. Thus, viewers' event models in working memory affect visual attentional selection while viewing visual narratives. |