
Objects guide human gaze behavior in dynamic real-world scenes

The complexity of natural scenes makes it challenging to experimentally study the mechanisms behind human gaze behavior when viewing dynamic environments. Historically, eye movements were believed to be driven primarily by space-based attention towards locations with salient features. Increasing evidence suggests, however, that visual attention does not select locations with high saliency but operates on attentional units given by the objects in the scene. We present a new computational framework to investigate the importance of objects for attentional guidance. This framework is designed to simulate realistic scanpaths for dynamic real-world scenes, including saccade timing and smooth pursuit behavior. Individual model components are based on psychophysically uncovered mechanisms of visual attention and saccadic decision-making, and all mechanisms are implemented in a modular fashion with a small number of readily interpretable parameters. To systematically analyze the importance of objects in guiding gaze behavior, we implemented five models within this framework: two purely spatial models (one based on low-level and one on high-level saliency); two object-based models (one incorporating low-level saliency for each object, the other using no saliency information); and a mixed model with object-based attention and selection but space-based inhibition of return. We optimized each model’s parameters with evolutionary algorithms to reproduce the saccade amplitude and fixation duration distributions of human scanpaths. We compared model performance with respect to spatial and temporal fixation behavior, including the proportion of fixations exploring the background, as well as detecting, inspecting, and returning to objects. A model with object-based attention and inhibition, which uses saliency information to prioritize between objects for saccadic selection, produces scanpath statistics most similar to the human data. This demonstrates that scanpath models benefit from object-based attention and selection, suggesting that object-level attentional units play an important role in guiding attentional processing.
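The abstract describes fitting each model's parameters with an evolutionary algorithm so that simulated scanpaths reproduce human saccade-amplitude and fixation-duration distributions. A minimal illustrative sketch of that kind of fitting loop is below; everything in it (the toy gamma-based stand-in for a scanpath model, the parameter names, the histogram distance, the ES settings) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "human" statistics: saccade amplitudes (degrees) and fixation
# durations (ms), drawn from gamma distributions for illustration.
human_amp = rng.gamma(shape=2.0, scale=3.0, size=2000)
human_dur = rng.gamma(shape=4.0, scale=60.0, size=2000)

def simulate_scanpath(params, n=2000):
    """Toy stand-in for a scanpath model: four interpretable parameters
    control the shape/scale of generated amplitude/duration samples."""
    a_shape, a_scale, d_shape, d_scale = params
    return (rng.gamma(a_shape, a_scale, size=n),
            rng.gamma(d_shape, d_scale, size=n))

def hist_distance(x, y, bins):
    """L1 distance between normalized histograms of two samples."""
    hx, _ = np.histogram(x, bins=bins, density=True)
    hy, _ = np.histogram(y, bins=bins, density=True)
    return float(np.abs(hx - hy).sum())

AMP_BINS = np.linspace(0.0, 30.0, 31)
DUR_BINS = np.linspace(0.0, 1500.0, 31)

def loss(params):
    """How far the simulated distributions are from the human ones."""
    amp, dur = simulate_scanpath(params)
    return (hist_distance(amp, human_amp, AMP_BINS)
            + hist_distance(dur, human_dur, DUR_BINS))

# (mu + lambda) evolution strategy over the four parameters.
pop = rng.uniform([0.5, 0.5, 0.5, 10.0], [5.0, 10.0, 8.0, 150.0], size=(20, 4))
for generation in range(30):
    scores = np.array([loss(p) for p in pop])
    parents = pop[np.argsort(scores)[:5]]               # mu = 5 survivors
    children = np.repeat(parents, 3, axis=0)            # lambda = 15 offspring
    children = children * rng.normal(1.0, 0.1, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0.1, None)

best = pop[np.argmin([loss(p) for p in pop])]
print("best parameters:", np.round(best, 2))
```

The same loop applies unchanged to any scanpath model that exposes a small parameter vector: only `simulate_scanpath` would be swapped for the real simulator, which is what makes distribution-matching objectives attractive for models whose likelihood is intractable.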


Bibliographic Details
Main Authors: Roth, Nicolas; Rolfs, Martin; Hellwich, Olaf; Obermayer, Klaus
Format: Online Article Text
Language: English
Journal: PLoS Comput Biol (Research Article)
Published: Public Library of Science, 2023-10-26
License: © 2023 Roth et al. Open access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10602265/
https://www.ncbi.nlm.nih.gov/pubmed/37883331
http://dx.doi.org/10.1371/journal.pcbi.1011512