Gravitational models explain shifts on human visual attention
Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed over the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which is assumed to be implemented in biological hardware in order to select the location of maximum saliency, towards which overt attention is directed. In this paper we propose a gravitational model of attentional shifts: every feature acts as an attractor, and shifts result from the joint effect of all the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
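To make the contrast between the two selection mechanisms concrete, here is a minimal illustrative Python sketch. It is not the authors' implementation: the inverse-square force law, the step size, and the function names (`wta_shift`, `gravitational_shift`) are all assumptions made for illustration. It contrasts a winner-take-all shift, which jumps straight to the global maximum of a saliency map, with a gravitational shift, in which every salient pixel pulls the current focus of attention like a point mass.

```python
import numpy as np

def wta_shift(saliency):
    """Winner-take-all: jump directly to the location of maximum saliency."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

def gravitational_shift(saliency, focus, dt=1.0, eps=1e-6):
    """One integration step of an illustrative gravitational shift.

    Every pixel acts as an attractor whose 'mass' is its saliency value;
    the focus of attention moves under the sum of the resulting pulls.
    The inverse-square law and the normalized step are assumed choices,
    not the paper's exact dynamics.
    """
    ys, xs = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
    diff = np.stack([ys - focus[0], xs - focus[1]], axis=-1).astype(float)
    dist = np.linalg.norm(diff, axis=-1) + eps            # avoid division by zero at the focus
    force = (saliency / dist**2)[..., None] * (diff / dist[..., None])
    step = force.sum(axis=(0, 1))                         # net pull of all attractors
    return focus + dt * step / (np.abs(step).max() + eps)  # normalized move toward the pull

# Toy usage: two salient blobs jointly pull the focus, rather than the
# stronger one winning outright as under WTA.
sal = np.zeros((64, 64))
sal[10, 10], sal[50, 50] = 1.0, 0.8
focus = np.array([32.0, 32.0])
print("WTA shift ->", wta_shift(sal))
for _ in range(5):
    focus = gravitational_shift(sal, focus)
print("gravitational focus ->", focus)
```

The design difference the sketch highlights is the one the abstract draws: WTA discards everything except the maximum, while the gravitational update aggregates the pull of every feature, so no single centralized saliency map is strictly required.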
Main Authors: | Zanca, Dario; Gori, Marco; Melacci, Stefano; Rufa, Alessandra |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7530662/ https://www.ncbi.nlm.nih.gov/pubmed/33005008 http://dx.doi.org/10.1038/s41598-020-73494-2 |
_version_ | 1783589611373592576 |
---|---|
author | Zanca, Dario Gori, Marco Melacci, Stefano Rufa, Alessandra |
author_facet | Zanca, Dario Gori, Marco Melacci, Stefano Rufa, Alessandra |
author_sort | Zanca, Dario |
collection | PubMed |
description | Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed over the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which is assumed to be implemented in biological hardware in order to select the location of maximum saliency, towards which overt attention is directed. In this paper we propose a gravitational model of attentional shifts: every feature acts as an attractor, and shifts result from the joint effect of all the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all. |
format | Online Article Text |
id | pubmed-7530662 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-7530662 2020-10-02 Gravitational models explain shifts on human visual attention Zanca, Dario Gori, Marco Melacci, Stefano Rufa, Alessandra Sci Rep Article Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed over the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which is assumed to be implemented in biological hardware in order to select the location of maximum saliency, towards which overt attention is directed. In this paper we propose a gravitational model of attentional shifts: every feature acts as an attractor, and shifts result from the joint effect of all the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all. Nature Publishing Group UK 2020-10-01 /pmc/articles/PMC7530662/ /pubmed/33005008 http://dx.doi.org/10.1038/s41598-020-73494-2 Text en © The Author(s) 2020 Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Zanca, Dario Gori, Marco Melacci, Stefano Rufa, Alessandra Gravitational models explain shifts on human visual attention |
title | Gravitational models explain shifts on human visual attention |
title_full | Gravitational models explain shifts on human visual attention |
title_fullStr | Gravitational models explain shifts on human visual attention |
title_full_unstemmed | Gravitational models explain shifts on human visual attention |
title_short | Gravitational models explain shifts on human visual attention |
title_sort | gravitational models explain shifts on human visual attention |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7530662/ https://www.ncbi.nlm.nih.gov/pubmed/33005008 http://dx.doi.org/10.1038/s41598-020-73494-2 |
work_keys_str_mv | AT zancadario gravitationalmodelsexplainshiftsonhumanvisualattention AT gorimarco gravitationalmodelsexplainshiftsonhumanvisualattention AT melaccistefano gravitationalmodelsexplainshiftsonhumanvisualattention AT rufaalessandra gravitationalmodelsexplainshiftsonhumanvisualattention |