Distributed attention beats the down-side of statistical context learning in visual search
Spatial attention can be deployed with a narrower focus to process individual items or distributed relatively broadly to process larger parts of a scene. This study investigated how focused- versus distributed-attention modes contribute to the adaptation of context-based memories that guide visual search.
Main Authors: | Zinchenko, Artyom; Conci, Markus; Hauser, Johannes; Müller, Hermann J.; Geyer, Thomas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | The Association for Research in Vision and Ophthalmology, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7424102/ http://dx.doi.org/10.1167/jov.20.7.4 |
_version_ | 1783570268580478976 |
---|---|
author | Zinchenko, Artyom Conci, Markus Hauser, Johannes Müller, Hermann J. Geyer, Thomas |
author_facet | Zinchenko, Artyom Conci, Markus Hauser, Johannes Müller, Hermann J. Geyer, Thomas |
author_sort | Zinchenko, Artyom |
collection | PubMed |
description | Spatial attention can be deployed with a narrower focus to process individual items or distributed relatively broadly to process larger parts of a scene. This study investigated how focused- versus distributed-attention modes contribute to the adaptation of context-based memories that guide visual search. In two experiments, participants were either required to fixate the screen center and use peripheral vision for search (“distributed attention”), or they could freely move their eyes, enabling serial scanning of the search array (“focused attention”). Both experiments consisted of an initial learning phase and a subsequent test phase. During learning, participants searched for targets presented either among repeated (invariant) or nonrepeated (randomly generated) spatial layouts of distractor items. Prior research showed that repeated encounters of invariant display arrangements lead to long-term context memory about these arrays, which can then come to guide search (contextual-cueing effect). The crucial manipulation in the test phase was a change of the target location within an otherwise constant distractor layout, which has previously been shown to abolish the cueing effect. The current results replicated these findings, although importantly only when attention was focused. By contrast, with distributed attention, the cueing effect recovered rapidly and attained a level comparable to the initial effect (before the target location change). This indicates that contextual cueing can adapt more easily when attention is distributed, likely because a broad attentional set facilitates the flexible updating of global (distractor-distractor), as compared to more local (distractor-target), context representations—allowing local changes to be incorporated more readily. |
format | Online Article Text |
id | pubmed-7424102 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | The Association for Research in Vision and Ophthalmology |
record_format | MEDLINE/PubMed |
spelling | pubmed-74241022020-08-26 Distributed attention beats the down-side of statistical context learning in visual search Zinchenko, Artyom Conci, Markus Hauser, Johannes Müller, Hermann J. Geyer, Thomas J Vis Article Spatial attention can be deployed with a narrower focus to process individual items or distributed relatively broadly to process larger parts of a scene. This study investigated how focused- versus distributed-attention modes contribute to the adaptation of context-based memories that guide visual search. In two experiments, participants were either required to fixate the screen center and use peripheral vision for search (“distributed attention”), or they could freely move their eyes, enabling serial scanning of the search array (“focused attention”). Both experiments consisted of an initial learning phase and a subsequent test phase. During learning, participants searched for targets presented either among repeated (invariant) or nonrepeated (randomly generated) spatial layouts of distractor items. Prior research showed that repeated encounters of invariant display arrangements lead to long-term context memory about these arrays, which can then come to guide search (contextual-cueing effect). The crucial manipulation in the test phase was a change of the target location within an otherwise constant distractor layout, which has previously been shown to abolish the cueing effect. The current results replicated these findings, although importantly only when attention was focused. By contrast, with distributed attention, the cueing effect recovered rapidly and attained a level comparable to the initial effect (before the target location change). This indicates that contextual cueing can adapt more easily when attention is distributed, likely because a broad attentional set facilitates the flexible updating of global (distractor-distractor), as compared to more local (distractor-target), context representations—allowing local changes to be incorporated more readily. The Association for Research in Vision and Ophthalmology 2020-07-06 /pmc/articles/PMC7424102/ http://dx.doi.org/10.1167/jov.20.7.4 Text en Copyright 2020 The Authors http://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License. |
spellingShingle | Article Zinchenko, Artyom Conci, Markus Hauser, Johannes Müller, Hermann J. Geyer, Thomas Distributed attention beats the down-side of statistical context learning in visual search |
title | Distributed attention beats the down-side of statistical context learning in visual search |
title_full | Distributed attention beats the down-side of statistical context learning in visual search |
title_fullStr | Distributed attention beats the down-side of statistical context learning in visual search |
title_full_unstemmed | Distributed attention beats the down-side of statistical context learning in visual search |
title_short | Distributed attention beats the down-side of statistical context learning in visual search |
title_sort | distributed attention beats the down-side of statistical context learning in visual search |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7424102/ http://dx.doi.org/10.1167/jov.20.7.4 |
work_keys_str_mv | AT zinchenkoartyom distributedattentionbeatsthedownsideofstatisticalcontextlearninginvisualsearch AT concimarkus distributedattentionbeatsthedownsideofstatisticalcontextlearninginvisualsearch AT hauserjohannes distributedattentionbeatsthedownsideofstatisticalcontextlearninginvisualsearch AT mullerhermannj distributedattentionbeatsthedownsideofstatisticalcontextlearninginvisualsearch AT geyerthomas distributedattentionbeatsthedownsideofstatisticalcontextlearninginvisualsearch |