Guiding visual attention in deep convolutional neural networks based on human eye movements

Bibliographic Details
Main Authors: van Dyck, Leonard Elia, Denzler, Sebastian Jochen, Gruber, Walter Roland
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9514055/
https://www.ncbi.nlm.nih.gov/pubmed/36177359
http://dx.doi.org/10.3389/fnins.2022.975639
_version_ 1784798195222577152
author van Dyck, Leonard Elia
Denzler, Sebastian Jochen
Gruber, Walter Roland
author_facet van Dyck, Leonard Elia
Denzler, Sebastian Jochen
Gruber, Walter Roland
author_sort van Dyck, Leonard Elia
collection PubMed
description Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision, have evolved into the best current computational models of object recognition, and consequently show strong architectural and functional parallels with the ventral visual pathway in comparisons with neuroimaging and neural time-series data. As recent advances in deep learning appear to decrease this similarity, computational neuroscience is challenged to reverse-engineer biological plausibility in order to obtain useful models. While previous studies have shown that biologically inspired architectures can amplify the human-likeness of such models, in this study we investigate a purely data-driven approach. We use human eye-tracking data to directly modify training examples and thereby guide the models’ visual attention during object recognition in natural images either toward or away from the focus of human fixations. We compare and validate the different manipulation types (i.e., standard, human-like, and non-human-like attention) through GradCAM saliency maps against human participant eye-tracking data. Our results demonstrate that the proposed guided focus manipulation works as intended in the negative direction: non-human-like models focus on significantly dissimilar image parts compared to humans. The observed effects were highly category-specific, were enhanced by animacy and face presence, developed only after feedforward processing was completed, and indicated a strong influence on face detection. With this approach, however, no significantly increased human-likeness was found. Possible applications of overt visual attention in DCNNs and further implications for theories of face detection are discussed.
format Online
Article
Text
id pubmed-9514055
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-95140552022-09-28 Guiding visual attention in deep convolutional neural networks based on human eye movements van Dyck, Leonard Elia Denzler, Sebastian Jochen Gruber, Walter Roland Front Neurosci Neuroscience Deep Convolutional Neural Networks (DCNNs) were originally inspired by principles of biological vision, have evolved into best current computational models of object recognition, and consequently indicate strong architectural and functional parallelism with the ventral visual pathway throughout comparisons with neuroimaging and neural time series data. As recent advances in deep learning seem to decrease this similarity, computational neuroscience is challenged to reverse-engineer the biological plausibility to obtain useful models. While previous studies have shown that biologically inspired architectures are able to amplify the human-likeness of the models, in this study, we investigate a purely data-driven approach. We use human eye tracking data to directly modify training examples and thereby guide the models’ visual attention during object recognition in natural images either toward or away from the focus of human fixations. We compare and validate different manipulation types (i.e., standard, human-like, and non-human-like attention) through GradCAM saliency maps against human participant eye tracking data. Our results demonstrate that the proposed guided focus manipulation works as intended in the negative direction and non-human-like models focus on significantly dissimilar image parts compared to humans. The observed effects were highly category-specific, enhanced by animacy and face presence, developed only after feedforward processing was completed, and indicated a strong influence on face detection. With this approach, however, no significantly increased human-likeness was found. Possible applications of overt visual attention in DCNNs and further implications for theories of face detection are discussed. Frontiers Media S.A. 2022-09-13 /pmc/articles/PMC9514055/ /pubmed/36177359 http://dx.doi.org/10.3389/fnins.2022.975639 Text en Copyright © 2022 van Dyck, Denzler and Gruber. https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
van Dyck, Leonard Elia
Denzler, Sebastian Jochen
Gruber, Walter Roland
Guiding visual attention in deep convolutional neural networks based on human eye movements
title Guiding visual attention in deep convolutional neural networks based on human eye movements
title_full Guiding visual attention in deep convolutional neural networks based on human eye movements
title_fullStr Guiding visual attention in deep convolutional neural networks based on human eye movements
title_full_unstemmed Guiding visual attention in deep convolutional neural networks based on human eye movements
title_short Guiding visual attention in deep convolutional neural networks based on human eye movements
title_sort guiding visual attention in deep convolutional neural networks based on human eye movements
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9514055/
https://www.ncbi.nlm.nih.gov/pubmed/36177359
http://dx.doi.org/10.3389/fnins.2022.975639
work_keys_str_mv AT vandyckleonardelia guidingvisualattentionindeepconvolutionalneuralnetworksbasedonhumaneyemovements
AT denzlersebastianjochen guidingvisualattentionindeepconvolutionalneuralnetworksbasedonhumaneyemovements
AT gruberwalterroland guidingvisualattentionindeepconvolutionalneuralnetworksbasedonhumaneyemovements
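
The description field above outlines a data-driven manipulation: training images are altered according to human fixation maps so that a model's attention is guided either toward or away from the regions humans look at, and the resulting models are validated by comparing GradCAM saliency maps against eye-tracking data. The sketch below is a minimal, hypothetical illustration of such a fixation-weighted image manipulation, not the authors' implementation: the function names, the parameters sigma and blur_sigma, and the blur-based blending strategy are assumptions made for illustration only.

# Hypothetical sketch (Python, NumPy/SciPy) of a fixation-weighted training-image
# manipulation in the spirit of the approach summarized in the abstract.
# All names and parameter values are illustrative; they are not from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, shape, sigma=25.0):
    """Turn (x, y) fixation coordinates into a smooth density map scaled to [0, 1]."""
    density = np.zeros(shape, dtype=np.float64)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            density[yi, xi] += 1.0
    density = gaussian_filter(density, sigma=sigma)
    if density.max() > 0:
        density /= density.max()
    return density

def manipulate_image(image, density, mode="human-like", blur_sigma=8.0):
    """Blend the image with a blurred copy so that fixated regions stay sharp
    ('human-like') or are suppressed ('non-human-like'); 'standard' leaves the
    image unchanged, mirroring the three manipulation types named in the abstract."""
    if mode == "standard":
        return image
    blurred = gaussian_filter(image.astype(np.float64),
                              sigma=(blur_sigma, blur_sigma, 0))
    weight = density if mode == "human-like" else 1.0 - density
    weight = weight[..., None]  # broadcast the 2-D weight over color channels
    out = weight * image.astype(np.float64) + (1.0 - weight) * blurred
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage example with a random stand-in image and made-up fixation points.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
fixations = [(112, 100), (130, 115), (90, 140)]
density = fixation_density_map(fixations, img.shape[:2])
human_like = manipulate_image(img, density, mode="human-like")
non_human_like = manipulate_image(img, density, mode="non-human-like")

Blurring non-attended regions is only one plausible way to realize such a manipulation; masking, desaturation, or contrast reduction would fit the same scheme.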