
Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information these features capture is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. Videos are represented by the classical Bag-of-Words (BoW) model, which has proven useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. To further capture strong relational information, we then construct a bipartite graph to model the relationship between the words of the two feature sets and apply a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our approach on several datasets and obtain very promising results.
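The codebook-compression step described above can be sketched in code. The following is an illustrative information-theoretic divisive clustering in Python, not the paper's exact algorithm: each codebook word is represented by a class-conditional distribution, and words whose distributions are close in KL-divergence are merged into a compact codebook. The function names and the toy two-class distributions are invented for illustration.

```python
import numpy as np

def kl(p, q, eps=1e-10):
    """KL divergence between two discrete distributions, with smoothing."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def divisive_cluster(word_dists, k, iters=20):
    """Group codebook words into k clusters by KL divergence.

    Each word is represented by a distribution over activity classes
    (e.g. P(class | word)); words with similar distributions are merged,
    yielding a compact codebook of dimension k.
    """
    W = np.asarray(word_dists, float)
    # deterministic farthest-first seeding of the k cluster centers
    seeds = [0]
    while len(seeds) < k:
        d = [min(kl(w, W[s]) for s in seeds) for w in W]
        seeds.append(int(np.argmax(d)))
    centers = W[seeds].copy()
    labels = np.zeros(len(W), dtype=int)
    for _ in range(iters):
        # assign each word to the KL-nearest cluster distribution
        new = np.array([int(np.argmin([kl(w, c) for c in centers])) for w in W])
        if np.array_equal(new, labels):
            break
        labels = new
        # recompute each cluster's distribution as the mean of its members
        for c in range(k):
            if np.any(labels == c):
                centers[c] = W[labels == c].mean(axis=0)
    return labels

# toy example: four words over two activity classes collapse to two clusters
words = [[0.9, 0.1], [0.85, 0.15], [0.1, 0.9], [0.15, 0.85]]
labels = divisive_cluster(words, 2)
```

Words 0 and 1 (both skewed toward the first class) end up in one cluster and words 2 and 3 in the other, so a 4-word codebook is compressed to 2 discriminative words.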

Bibliographic Details
Main Author: Yao, Li
Format: Online Article Text
Language: English
Published: Hindawi Publishing Corporation 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021910/
https://www.ncbi.nlm.nih.gov/pubmed/27656199
http://dx.doi.org/10.1155/2016/1760172
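The bipartite-graph step of the abstract can also be sketched. The following is a hedged spectral co-clustering sketch in Python (in the style of standard spectral bipartite partitioning), not the paper's exact k-way partition: static-feature words index the rows and motion-feature words the columns of a co-occurrence matrix, and words assigned to the same cluster form one merged word of the new relational codebook. All function names and the toy matrix are invented for illustration.

```python
import numpy as np

def _kmeans(Z, k, iters=50):
    """Lloyd's k-means with deterministic farthest-first seeding."""
    idx = [int(np.argmax(np.linalg.norm(Z - Z.mean(axis=0), axis=1)))]
    while len(idx) < k:
        d = np.min(((Z[:, None, :] - Z[idx][None, :, :]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    centers = Z[idx].copy()
    for _ in range(iters):
        labels = np.argmin(((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = Z[labels == c].mean(axis=0)
    return labels

def bipartite_coclusters(A, k):
    """Spectral k-way partition of a bipartite word-relation graph.

    Rows of A index static-feature words, columns index motion-feature
    words, and A[i, j] is their co-occurrence strength. Row and column
    words sharing a cluster label form one merged relational word.
    """
    A = np.asarray(A, float)
    d1, d2 = A.sum(axis=1), A.sum(axis=0)
    # degree-normalize, as in spectral bipartite partitioning
    An = A / np.sqrt(d1 + 1e-12)[:, None] / np.sqrt(d2 + 1e-12)[None, :]
    U, s, Vt = np.linalg.svd(An, full_matrices=False)
    m = max(1, int(np.ceil(np.log2(k))))              # embedding dimension
    Z = np.vstack([U[:, 1:1 + m], Vt.T[:, 1:1 + m]])  # skip the trivial pair
    labels = _kmeans(Z, k)
    return labels[:A.shape[0]], labels[A.shape[0]:]

# toy example: two groups of static/motion words that mostly co-occur
A = np.array([[5.0, 5.0, 0.1, 0.1],
              [4.0, 6.0, 0.1, 0.1],
              [0.1, 0.1, 5.0, 5.0],
              [0.1, 0.1, 6.0, 4.0]])
row_labels, col_labels = bipartite_coclusters(A, 2)
```

On this toy matrix, static words {0, 1} and motion words {0, 1} land in one cluster and the remaining words in the other, so strongly co-occurring static and motion words are fused into shared codebook entries.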
_version_ 1782453418430824448
author Yao, Li
author_facet Yao, Li
author_sort Yao, Li
collection PubMed
description Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information these features capture is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. Videos are represented by the classical Bag-of-Words (BoW) model, which has proven useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. To further capture strong relational information, we then construct a bipartite graph to model the relationship between the words of the two feature sets and apply a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our approach on several datasets and obtain very promising results.
format Online
Article
Text
id pubmed-5021910
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Hindawi Publishing Corporation
record_format MEDLINE/PubMed
spelling pubmed-50219102016-09-21 Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos Yao, Li Comput Intell Neurosci Research Article Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information these features capture is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. Videos are represented by the classical Bag-of-Words (BoW) model, which has proven useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. To further capture strong relational information, we then construct a bipartite graph to model the relationship between the words of the two feature sets and apply a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our approach on several datasets and obtain very promising results. Hindawi Publishing Corporation 2016 2016-08-29 /pmc/articles/PMC5021910/ /pubmed/27656199 http://dx.doi.org/10.1155/2016/1760172 Text en Copyright © 2016 Li Yao. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Yao, Li
Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title_full Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title_fullStr Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title_full_unstemmed Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title_short Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos
title_sort extract the relational information of static features and motion features for human activities recognition in videos
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5021910/
https://www.ncbi.nlm.nih.gov/pubmed/27656199
http://dx.doi.org/10.1155/2016/1760172
work_keys_str_mv AT yaoli extracttherelationalinformationofstaticfeaturesandmotionfeaturesforhumanactivitiesrecognitioninvideos