
Follower: A Novel Self-Deployable Action Recognition Framework

Deep learning has improved the performance of vision-based action recognition algorithms, but such methods require large amounts of labeled training data, which limits their generality. To address this issue, this paper proposes FOLLOWER, a novel self-deployable ubiquitous action recognition framework that enables a self-motivated user to bootstrap and deploy action recognition services. Our main idea is to build a “fingerprint” library of actions from a small number of user-defined sample action data, and then recognize actions by matching against this library. The key step is constructing a suitable “fingerprint”; to this end, a pose-action normalized feature extraction method based on three-dimensional pose sequences is designed. FOLLOWER consists of two main stages: the guide process and the follow process. The guide process extracts pose-action normalized features and selects the intra-class central feature to build the “fingerprint” library of actions. The follow process extracts pose-action normalized features from the target video and uses motion detection, action filtering, and an adaptive-weight offset template to identify the actions in the video sequence. Finally, we collect an action video dataset with human pose annotations to study self-deployable action recognition and pose-estimation-based action recognition. Experiments on this dataset show that FOLLOWER effectively recognizes actions in video sequences, with recognition accuracy reaching 96.74%.
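The guide/follow pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the normalization scheme, the "central feature" selection, and the plain nearest-fingerprint matching below are simplified assumptions (fixed-length pose sequences, a hypothetical root and torso joint, Euclidean distance instead of the paper's adaptive-weight offset template):

```python
import numpy as np

def normalize_pose_sequence(seq):
    """Crude pose normalization: center each frame on the root joint and
    scale by the mean length of a (hypothetical) torso joint vector.
    seq: array of shape (T, J, 3) of 3-D joint positions."""
    seq = seq - seq[:, :1, :]  # translate joint 0 (assumed root) to the origin
    scale = np.linalg.norm(seq[:, 1, :], axis=-1).mean() + 1e-8  # joint 1 assumed torso
    return seq / scale

def build_fingerprint_library(samples_by_action):
    """Guide process (sketch): for each action, keep the sample feature
    closest to the class mean as that action's 'fingerprint'."""
    library = {}
    for action, samples in samples_by_action.items():
        feats = np.stack([normalize_pose_sequence(s).ravel() for s in samples])
        center = feats.mean(axis=0)
        library[action] = feats[np.argmin(np.linalg.norm(feats - center, axis=1))]
    return library

def recognize(segment, library):
    """Follow process (sketch): nearest-fingerprint matching on one segment."""
    feat = normalize_pose_sequence(segment).ravel()
    return min(library, key=lambda action: np.linalg.norm(library[action] - feat))
```

The paper's follow process additionally applies motion detection and action filtering to the video stream before matching; this sketch shows only the fingerprint-matching core.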

Bibliographic Details
Main Authors: Yang, Xu; Liu, Dongjingdian; Liu, Jing; Yan, Faren; Chen, Pengpeng; Niu, Qiang
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7867099/
https://www.ncbi.nlm.nih.gov/pubmed/33535389
http://dx.doi.org/10.3390/s21030950
author Yang, Xu
Liu, Dongjingdian
Liu, Jing
Yan, Faren
Chen, Pengpeng
Niu, Qiang
collection PubMed
format Online
Article
Text
id pubmed-7867099
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7867099 2021-02-07 Follower: A Novel Self-Deployable Action Recognition Framework Sensors (Basel) Article MDPI 2021-02-01 /pmc/articles/PMC7867099/ /pubmed/33535389 http://dx.doi.org/10.3390/s21030950 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Follower: A Novel Self-Deployable Action Recognition Framework
topic Article