Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition


Bibliographic Details
Main Authors: Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
Format: Online Article Text
Language: English
Published: Public Library of Science 2015
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489578/
https://www.ncbi.nlm.nih.gov/pubmed/26132270
http://dx.doi.org/10.1371/journal.pone.0130569
_version_ 1782379377697226752
author Shu, Na
Gao, Zhiyong
Chen, Xiangan
Liu, Haihua
author_facet Shu, Na
Gao, Zhiyong
Chen, Xiangan
Liu, Haihua
author_sort Shu, Na
collection PubMed
description Humans can easily understand other people’s actions through the visual system, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper for automatic action recognition. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1) and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells, tuned to different speeds and orientations in time, for the detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on the analysis of spike trains generated by spiking neurons. Experimental evaluation on several publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model.
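The description above outlines the model's front end: a bank of three-dimensional spatio-temporal Gabor filters tuned to different speeds and orientations, whose responses are then reduced by a surround suppressive operator before the attention and spike-based coding stages. The Python sketch below is a minimal illustration of just those first two stages applied to a toy drifting-bar clip; the function names, parameter values, and the difference-of-Gaussians surround weighting are assumptions made for this example, not the authors' implementation, and the perceptual-grouping attention stage and spiking-neuron mean motion map are omitted.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def spatiotemporal_gabor(size_xy=15, size_t=7, theta=0.0, speed=1.0,
                         freq=0.25, sigma=3.0, tau=1.5):
    # One 3-D (y, x, t) Gabor kernel whose carrier drifts along `theta`
    # at `speed` pixels/frame; an illustrative form, not the paper's exact filter.
    half_xy, half_t = size_xy // 2, size_t // 2
    y, x, t = np.meshgrid(np.arange(-half_xy, half_xy + 1),
                          np.arange(-half_xy, half_xy + 1),
                          np.arange(-half_t, half_t + 1), indexing='ij')
    x_r = x * np.cos(theta) + y * np.sin(theta)   # spatial axis along the motion direction
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * tau**2))
    carrier = np.cos(2 * np.pi * freq * (x_r - speed * t))
    kernel = envelope * carrier
    return kernel - kernel.mean()                 # zero-mean, like a simple-cell receptive field

def surround_suppression(response, sigma_c=2.0, sigma_s=6.0, alpha=1.0):
    # Subtract a non-classical-surround estimate (difference-of-Gaussians weighting
    # over space, applied frame by frame) from the rectified response, then rectify again.
    out = np.empty_like(response)
    for k in range(response.shape[2]):
        frame = np.abs(response[:, :, k])
        surround = gaussian_filter(frame, sigma_s) - gaussian_filter(frame, sigma_c)
        out[:, :, k] = np.maximum(frame - alpha * np.maximum(surround, 0.0), 0.0)
    return out

# Toy clip: a bright bar drifting rightward at roughly 1 pixel per frame.
video = np.zeros((64, 64, 9))
for k in range(9):
    video[28:36, 20 + k:24 + k, k] = 1.0

# Filter bank over 4 orientations and 3 speeds, followed by surround suppression.
bank = [spatiotemporal_gabor(theta=th, speed=v)
        for th in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)
        for v in (0.5, 1.0, 2.0)]
responses = [surround_suppression(convolve(video, kern, mode='constant'))
             for kern in bank]

# Crude stand-in for a motion map: mean suppressed energy per filter channel.
mean_motion_map = np.array([r.mean() for r in responses])
print(mean_motion_map.round(4))

In this sketch each channel's rectified, suppressed response is simply averaged to give one entry of a crude motion map; the paper's actual representation is built from the spike trains of spiking neurons rather than from raw filter energies.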
format Online
Article
Text
id pubmed-4489578
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-44895782015-07-14 Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition Shu, Na Gao, Zhiyong Chen, Xiangan Liu, Haihua PLoS One Research Article Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of the three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cell tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose surround suppressive operator to further process spatiotemporal information. Visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. Public Library of Science 2015-07-01 /pmc/articles/PMC4489578/ /pubmed/26132270 http://dx.doi.org/10.1371/journal.pone.0130569 Text en © 2015 Shu et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Shu, Na
Gao, Zhiyong
Chen, Xiangan
Liu, Haihua
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title_full Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title_fullStr Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title_full_unstemmed Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title_short Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
title_sort computational model of primary visual cortex combining visual attention for action recognition
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4489578/
https://www.ncbi.nlm.nih.gov/pubmed/26132270
http://dx.doi.org/10.1371/journal.pone.0130569
work_keys_str_mv AT shuna computationalmodelofprimaryvisualcortexcombiningvisualattentionforactionrecognition
AT gaozhiyong computationalmodelofprimaryvisualcortexcombiningvisualattentionforactionrecognition
AT chenxiangan computationalmodelofprimaryvisualcortexcombiningvisualattentionforactionrecognition
AT liuhaihua computationalmodelofprimaryvisualcortexcombiningvisualattentionforactionrecognition