
A brain-inspired object-based attention network for multiobject recognition and visual reasoning

The visual system uses sequences of selective glimpses to objects to support goal-directed behavior, but how is this attention control learned? Here we present an encoder–decoder model inspired by the interacting bottom-up and top-down visual pathways making up the recognition-attention system in the brain. At every iteration, a new glimpse is taken from the image and is processed through the “what” encoder, a hierarchy of feedforward, recurrent, and capsule layers, to obtain an object-centric (object-file) representation. This representation feeds to the “where” decoder, where the evolving recurrent representation provides top-down attentional modulation to plan subsequent glimpses and impact routing in the encoder. We demonstrate how the attention mechanism significantly improves the accuracy of classifying highly overlapping digits. In a visual reasoning task requiring comparison of two objects, our model achieves near-perfect accuracy and significantly outperforms larger models in generalizing to unseen stimuli. Our work demonstrates the benefits of object-based attention mechanisms taking sequential glimpses of objects.
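The per-iteration loop the abstract describes (glimpse → “what” encoder → object file → “where” decoder → next glimpse location) can be sketched schematically. The toy NumPy loop below is illustrative only and is not the authors' model: the real system uses feedforward, recurrent, and capsule layers with learned top-down routing, whereas here untrained random linear maps stand in for both pathways.

```python
import numpy as np

rng = np.random.default_rng(0)

def take_glimpse(image, center, size=7):
    """Crop a square patch ('glimpse') around `center`, zero-padded at borders."""
    half = size // 2
    padded = np.pad(image, half)
    r, c = center
    return padded[r:r + size, c:c + size]

def encode(glimpse, W_enc):
    """'What' pathway stand-in: project the glimpse to an object-centric vector."""
    return np.tanh(W_enc @ glimpse.ravel())

def decode_attention(obj_vec, W_dec, image_shape):
    """'Where' pathway stand-in: map the object vector to a normalized attention
    map over image locations; its peak is chosen as the next glimpse center."""
    logits = W_dec @ obj_vec
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    return attn.reshape(image_shape)

# Toy image and random (untrained) weights -- purely for illustration.
image = rng.random((20, 20))
glimpse_size = 7
W_enc = rng.standard_normal((16, glimpse_size * glimpse_size)) * 0.1
W_dec = rng.standard_normal((20 * 20, 16)) * 0.1

center = (10, 10)  # start at the image center
trace = []
for step in range(3):  # three glimpse iterations
    g = take_glimpse(image, center, glimpse_size)
    obj_vec = encode(g, W_enc)                            # "what": object file
    attn = decode_attention(obj_vec, W_dec, image.shape)  # "where": attention map
    center = np.unravel_index(attn.argmax(), attn.shape)  # plan the next glimpse
    trace.append(tuple(int(x) for x in center))

print(trace)
```

In the actual model, the encoder and decoder are trained end to end so that the attention map reflects task goals; this sketch only shows the shape of the iterative glimpse–encode–attend cycle.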

Bibliographic Details
Main Authors: Adeli, Hossein; Ahn, Seoyoung; Zelinsky, Gregory J.
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology, 2023
Subjects: Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10210512/
https://www.ncbi.nlm.nih.gov/pubmed/37212782
http://dx.doi.org/10.1167/jov.23.5.16
Institution: National Center for Biotechnology Information
Collection: PubMed (record pubmed-10210512)
Record Format: MEDLINE/PubMed
Journal: J Vis
Published online: 2023-05-22
License: Copyright 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).