
BIAS-3D: Brain inspired attentional search model fashioned after what and where/how pathways for target search in 3D environment

We propose a brain inspired attentional search model for target search in a 3D environment, which has two separate channels—one for the object classification, analogous to the “what” pathway in the human visual system, and the other for prediction of the next location of the camera, analogous to the “where” pathway. To evaluate the proposed model, we generated 3D Cluttered Cube datasets that consist of an image on one vertical face, and clutter or background images on the other faces. The camera goes around each cube on a circular orbit and determines the identity of the image pasted on the face. The images pasted on the cube faces were drawn from: MNIST handwriting digit, QuickDraw, and RGB MNIST handwriting digit datasets. The attentional input of three concentric cropped windows resembling the high-resolution central fovea and low-resolution periphery of the retina, flows through a Classifier Network and a Camera Motion Network. The Classifier Network classifies the current view into one of the target classes or the clutter. The Camera Motion Network predicts the camera's next position on the orbit (varying the azimuthal angle or “θ”). Here the camera performs one of three actions: move right, move left, or do not move. The Camera-Position Network adds the camera's current position (θ) into the higher features level of the Classifier Network and the Camera Motion Network. The Camera Motion Network is trained using Q-learning where the reward is 1 if the classifier network gives the correct classification, otherwise 0. Total loss is computed by adding the mean square loss of temporal difference and cross entropy loss. Then the model is trained end-to-end by backpropagating the total loss using Adam optimizer. Results on two grayscale image datasets and one RGB image dataset show that the proposed model is successfully able to discover the desired search pattern to find the target face on the cube, and also classify the target face accurately.
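The training objective described in the abstract combines a reinforcement-learning term (the Q-learning temporal-difference error as a mean squared loss for the Camera Motion Network) with a supervised term (the cross-entropy loss of the Classifier Network). The sketch below is a minimal PyTorch-style reading of that combination; the function name, argument names, discount factor, and tensor shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def total_loss(q_values, action, reward, next_q_values,
               class_logits, class_target, gamma=0.9, done=False):
    # q_values: Q-estimates for the three camera actions (move left, move
    # right, do not move) at the current azimuthal position; next_q_values:
    # the same for the next position. gamma and all names are assumptions.
    target = torch.tensor(float(reward), dtype=q_values.dtype)
    if not done:
        # Standard Q-learning target: reward (1 for a correct classification,
        # 0 otherwise) plus the discounted best next Q-value.
        target = target + gamma * next_q_values.max().detach()
    td_loss = F.mse_loss(q_values[action], target)
    # Cross-entropy over the target classes plus a clutter class.
    ce_loss = F.cross_entropy(class_logits.unsqueeze(0),
                              class_target.unsqueeze(0))
    # The abstract states that the two losses are summed and backpropagated
    # end to end with the Adam optimizer.
    return td_loss + ce_loss

An Adam step on this summed loss would update the Classifier Network and the Camera Motion Network jointly, matching the end-to-end training described above; how the authors handle the TD target (e.g., whether a separate target network is used) is not stated in this record.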


Bibliographic Details
Main Authors: Kumari, Sweta; Shobha Amala, V. Y.; Nivethithan, M.; Chakravarthy, V. Srinivasa
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022-11-18
Journal: Front Comput Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9716564/
https://www.ncbi.nlm.nih.gov/pubmed/36465964
http://dx.doi.org/10.3389/fncom.2022.1012559
Copyright © 2022 Kumari, Shobha Amala, Nivethithan and Chakravarthy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. Use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited.