Learning joints relation graphs for video action recognition
Previous video action recognition mainly focuses on extracting spatial and temporal features from videos or capturing physical dependencies among joints. The relation between joints is often ignored. Modeling the relation between joints is important for action recognition. Aiming at learning discriminative relation between joints, this paper proposes a joint spatial-temporal reasoning (JSTR) framework to recognize action from videos. For the spatial representation, a joints spatial relation graph is built to capture position relations between joints. For the temporal representation, temporal information of body joints is modeled by the intra-joint temporal relation graph. The spatial reasoning feature and the temporal reasoning feature are fused to recognize action from videos. The effectiveness of our method is demonstrated in three real-world video action recognition datasets. The experiment results display good performance across all of these datasets.
Main Authors: Liu, Xiaodong; Xu, Huating; Wang, Miao
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9597689/ https://www.ncbi.nlm.nih.gov/pubmed/36310629 http://dx.doi.org/10.3389/fnbot.2022.918434
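The abstract describes three components: a joints spatial relation graph over the joints within each frame, an intra-joint temporal relation graph over each joint across frames, and a fusion of the two reasoning features. Since this record does not include the paper's code, the PyTorch sketch below is only a minimal illustration of that kind of computation; the dot-product affinities, the single reasoning step, the mean pooling, the concatenation fusion, and the names `JointRelationReasoning` and `JSTRSketch` are all assumptions made for this example, not the authors' implementation.

```python
# Minimal sketch of the JSTR idea summarized in the abstract, NOT the
# authors' released code. Dot-product affinities, a single reasoning step,
# mean pooling, and concatenation fusion are all assumptions made here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class JointRelationReasoning(nn.Module):
    """One reasoning step over a fully connected relation graph:
    learned pairwise affinities weight how node features are mixed."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.update = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, nodes, dim)
        # Pairwise affinities between nodes define the relation graph edges.
        affinity = self.query(x) @ self.key(x).transpose(1, 2)
        graph = F.softmax(affinity / x.size(-1) ** 0.5, dim=-1)
        # Aggregate neighbor features along the learned edges (residual).
        return x + F.relu(self.update(graph @ x))


class JSTRSketch(nn.Module):
    """Spatial reasoning over the joints of each frame, temporal reasoning
    over each joint's trajectory, then fusion of the two features."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.spatial = JointRelationReasoning(dim)   # nodes: joints in a frame
        self.temporal = JointRelationReasoning(dim)  # nodes: one joint over time
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, x):  # x: (batch, frames, joints, dim) joint features
        b, t, j, d = x.shape
        # Joints spatial relation graph: reason over joints within each frame.
        s = self.spatial(x.reshape(b * t, j, d)).reshape(b, t, j, d)
        # Intra-joint temporal relation graph: reason over frames per joint.
        xt = x.transpose(1, 2).reshape(b * j, t, d)
        m = self.temporal(xt).reshape(b, j, t, d).transpose(1, 2)
        # Fuse the spatial and temporal reasoning features for classification.
        fused = torch.cat([s.mean(dim=(1, 2)), m.mean(dim=(1, 2))], dim=-1)
        return self.classifier(fused)


# Hypothetical usage: 2 clips, 16 frames, 18 joints, 64-dim joint features.
model = JSTRSketch(dim=64, num_classes=60)
logits = model(torch.randn(2, 16, 18, 64))  # shape (2, 60)
```

Replacing the learned affinities with a fixed skeleton adjacency would recover a plain graph-convolution baseline; per the abstract, the paper's point is to learn the relations between joints rather than fix them to physical connectivity.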
Field | Value
---|---
_version_ | 1784816150635347968
author | Liu, Xiaodong; Xu, Huating; Wang, Miao
author_sort | Liu, Xiaodong |
collection | PubMed |
description | Previous video action recognition mainly focuses on extracting spatial and temporal features from videos or capturing physical dependencies among joints. The relation between joints is often ignored. Modeling the relation between joints is important for action recognition. Aiming at learning discriminative relation between joints, this paper proposes a joint spatial-temporal reasoning (JSTR) framework to recognize action from videos. For the spatial representation, a joints spatial relation graph is built to capture position relations between joints. For the temporal representation, temporal information of body joints is modeled by the intra-joint temporal relation graph. The spatial reasoning feature and the temporal reasoning feature are fused to recognize action from videos. The effectiveness of our method is demonstrated in three real-world video action recognition datasets. The experiment results display good performance across all of these datasets. |
format | Online Article Text |
id | pubmed-9597689 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9597689 2022-10-27. Learning joints relation graphs for video action recognition. Liu, Xiaodong; Xu, Huating; Wang, Miao. Front Neurorobot (Neuroscience). Frontiers Media S.A. 2022-10-11. /pmc/articles/PMC9597689/ /pubmed/36310629 http://dx.doi.org/10.3389/fnbot.2022.918434 Text en. Copyright © 2022 Liu, Xu and Wang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title | Learning joints relation graphs for video action recognition |
title_sort | learning joints relation graphs for video action recognition |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9597689/ https://www.ncbi.nlm.nih.gov/pubmed/36310629 http://dx.doi.org/10.3389/fnbot.2022.918434 |
work_keys_str_mv | AT liuxiaodong learningjointsrelationgraphsforvideoactionrecognition AT xuhuating learningjointsrelationgraphsforvideoactionrecognition AT wangmiao learningjointsrelationgraphsforvideoactionrecognition |