
An efficient self-attention network for skeleton-based action recognition

There has been significant progress in skeleton-based action recognition. The human skeleton can be naturally structured as a graph, so graph convolutional networks have become the most popular approach to this task, and most state-of-the-art methods optimize the structure of the skeleton graph to obtain better performance. Building on these advances, a simple but strong network is proposed with three major contributions. First, inspired by adaptive graph convolutional networks and non-local blocks, several self-attention modules are designed to exploit spatial and temporal dependencies and dynamically optimize the graph structure. Second, a lightweight yet efficient network architecture is designed for skeleton-based action recognition. Third, a data-enrichment trick is proposed that augments the skeleton data with bone-connection information, which noticeably improves performance. The method achieves 90.5% accuracy on the cross-subject setting of NTU60, with 0.89M parameters and a computation cost of 0.32 GMACs. This work is expected to inspire new ideas in the field.
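The abstract names two concrete mechanisms: enriching the joint data with bone vectors, and a non-local-style self-attention that learns a data-dependent adjacency over the joints. A minimal PyTorch sketch of both ideas follows. This is not the authors' released code: the joint count, the parent list, and all module and variable names are illustrative assumptions.

```python
# Minimal sketch, assuming NTU60-like skeletons with V = 25 joints.
import torch
import torch.nn as nn

# Hypothetical parent index for each of the 25 joints (kinematic tree);
# the root joint is its own parent, so its bone vector is zero.
PARENTS = [0, 0, 1, 2, 0, 4, 5, 6, 0, 8, 9, 10,
           0, 12, 13, 14, 0, 16, 17, 18, 1, 7, 7, 11, 11]

def add_bone_stream(joints: torch.Tensor) -> torch.Tensor:
    """Concatenate bone vectors (child joint minus parent joint) onto the
    coordinate channels: (N, C, T, V) -> (N, 2C, T, V)."""
    bones = joints - joints[..., PARENTS]  # vector from parent to child
    return torch.cat([joints, bones], dim=1)

class SpatialSelfAttention(nn.Module):
    """Scaled dot-product attention across the V joints of each frame,
    i.e. a dynamically computed joint-adjacency, as in non-local blocks."""
    def __init__(self, channels: int, inter_channels: int = 16):
        super().__init__()
        self.query = nn.Conv2d(channels, inter_channels, 1)
        self.key = nn.Conv2d(channels, inter_channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V)
        n, c, t, v = x.shape
        q = self.query(x).permute(0, 2, 3, 1).reshape(n * t, v, -1)  # (NT, V, C')
        k = self.key(x).permute(0, 2, 1, 3).reshape(n * t, -1, v)    # (NT, C', V)
        attn = torch.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)     # (NT, V, V)
        val = self.value(x).permute(0, 2, 3, 1).reshape(n * t, v, c) # (NT, V, C)
        out = (attn @ val).reshape(n, t, v, c).permute(0, 3, 1, 2)   # (N, C, T, V)
        return x + out  # residual connection, as in non-local blocks

# Usage: batch of 2 clips, xyz coordinates, 64 frames, 25 joints.
x = torch.randn(2, 3, 64, 25)
x = add_bone_stream(x)                 # (2, 6, 64, 25)
y = SpatialSelfAttention(channels=6)(x)  # (2, 6, 64, 25)
```

Because the attention map is recomputed from the input at every forward pass, the joint adjacency is data-dependent rather than fixed to the physical skeleton, which is the sense in which the abstract says the graph structure is "dynamically optimized".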


Bibliographic Details
Main Authors: Qin, Xiaofei, Cai, Rui, Yu, Jiabin, He, Changxiang, Zhang, Xuedian
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8904510/
https://www.ncbi.nlm.nih.gov/pubmed/35260760
http://dx.doi.org/10.1038/s41598-022-08157-5
_version_ 1784664967461470208
author Qin, Xiaofei
Cai, Rui
Yu, Jiabin
He, Changxiang
Zhang, Xuedian
author_facet Qin, Xiaofei
Cai, Rui
Yu, Jiabin
He, Changxiang
Zhang, Xuedian
author_sort Qin, Xiaofei
collection PubMed
description There has been significant progress in skeleton-based action recognition. The human skeleton can be naturally structured as a graph, so graph convolutional networks have become the most popular approach to this task, and most state-of-the-art methods optimize the structure of the skeleton graph to obtain better performance. Building on these advances, a simple but strong network is proposed with three major contributions. First, inspired by adaptive graph convolutional networks and non-local blocks, several self-attention modules are designed to exploit spatial and temporal dependencies and dynamically optimize the graph structure. Second, a lightweight yet efficient network architecture is designed for skeleton-based action recognition. Third, a data-enrichment trick is proposed that augments the skeleton data with bone-connection information, which noticeably improves performance. The method achieves 90.5% accuracy on the cross-subject setting of NTU60, with 0.89M parameters and a computation cost of 0.32 GMACs. This work is expected to inspire new ideas in the field.
format Online
Article
Text
id pubmed-8904510
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-8904510 2022-03-09 An efficient self-attention network for skeleton-based action recognition Qin, Xiaofei Cai, Rui Yu, Jiabin He, Changxiang Zhang, Xuedian Sci Rep Article There has been significant progress in skeleton-based action recognition. The human skeleton can be naturally structured as a graph, so graph convolutional networks have become the most popular approach to this task, and most state-of-the-art methods optimize the structure of the skeleton graph to obtain better performance. Building on these advances, a simple but strong network is proposed with three major contributions. First, inspired by adaptive graph convolutional networks and non-local blocks, several self-attention modules are designed to exploit spatial and temporal dependencies and dynamically optimize the graph structure. Second, a lightweight yet efficient network architecture is designed for skeleton-based action recognition. Third, a data-enrichment trick is proposed that augments the skeleton data with bone-connection information, which noticeably improves performance. The method achieves 90.5% accuracy on the cross-subject setting of NTU60, with 0.89M parameters and a computation cost of 0.32 GMACs. This work is expected to inspire new ideas in the field. Nature Publishing Group UK 2022-03-08 /pmc/articles/PMC8904510/ /pubmed/35260760 http://dx.doi.org/10.1038/s41598-022-08157-5 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Qin, Xiaofei
Cai, Rui
Yu, Jiabin
He, Changxiang
Zhang, Xuedian
An efficient self-attention network for skeleton-based action recognition
title An efficient self-attention network for skeleton-based action recognition
title_full An efficient self-attention network for skeleton-based action recognition
title_fullStr An efficient self-attention network for skeleton-based action recognition
title_full_unstemmed An efficient self-attention network for skeleton-based action recognition
title_short An efficient self-attention network for skeleton-based action recognition
title_sort efficient self-attention network for skeleton-based action recognition
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8904510/
https://www.ncbi.nlm.nih.gov/pubmed/35260760
http://dx.doi.org/10.1038/s41598-022-08157-5
work_keys_str_mv AT qinxiaofei anefficientselfattentionnetworkforskeletonbasedactionrecognition
AT cairui anefficientselfattentionnetworkforskeletonbasedactionrecognition
AT yujiabin anefficientselfattentionnetworkforskeletonbasedactionrecognition
AT hechangxiang anefficientselfattentionnetworkforskeletonbasedactionrecognition
AT zhangxuedian anefficientselfattentionnetworkforskeletonbasedactionrecognition
AT qinxiaofei efficientselfattentionnetworkforskeletonbasedactionrecognition
AT cairui efficientselfattentionnetworkforskeletonbasedactionrecognition
AT yujiabin efficientselfattentionnetworkforskeletonbasedactionrecognition
AT hechangxiang efficientselfattentionnetworkforskeletonbasedactionrecognition
AT zhangxuedian efficientselfattentionnetworkforskeletonbasedactionrecognition