Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition
To provide accessible, intelligent, and efficient remote access such as the internet of things, rehabilitation, autonomous driving, virtual games, and healthcare, human action recognition (HAR) has gained much attention among computer vision researchers. Several methods have already been addressed to ensure effective and efficient action recognition based on different perspectives including data modalities, feature design, network configuration, and application domains. In this article, we design a new deep learning model by integrating criss-cross attention and edge convolution to extract discriminative features from the skeleton sequence for action recognition. The attention mechanism is applied in spatial and temporal directions to pursue the intra- and inter-frame relationships. Then, several edge convolutional layers are conducted to explore the geometric relationships among the neighboring joints in the human body. The proposed model is dynamically updated after each layer by recomputing the graph on the basis of k-nearest joints for learning local and global information in action sequences. We used publicly available benchmark skeleton datasets such as UTD-MHAD (University of Texas at Dallas multimodal human action dataset) and MSR-Action3D (Microsoft action 3D) to evaluate the proposed method. We also investigated the proposed method with different configurations of network architectures to assure effectiveness and robustness. The proposed method achieved average accuracies of 99.53% and 95.64% on the UTD-MHAD and MSR-Action3D datasets, respectively, outperforming state-of-the-art methods.
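The core mechanism the abstract describes, recomputing a k-nearest-neighbor graph over the joints and applying an edge convolution at each layer, follows the general EdgeConv/DGCNN pattern. The following is a minimal NumPy sketch of that idea only; the function names, shapes, toy data, and the single-matrix "shared MLP" are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def knn_graph(joints, k):
    """Return, for each joint, the indices of its k nearest joints (Euclidean)."""
    diff = joints[:, None, :] - joints[None, :, :]   # (J, J, D) pairwise differences
    dist = (diff ** 2).sum(-1)                       # (J, J) squared distances
    np.fill_diagonal(dist, np.inf)                   # exclude self-loops
    return np.argsort(dist, axis=1)[:, :k]           # (J, k) neighbor indices

def edge_conv(features, joints, k, weight):
    """One EdgeConv-style layer: the graph is recomputed from the current
    joint positions, then edge features [x_i, x_j - x_i] are mapped by a
    shared linear layer + ReLU and max-pooled over the k neighbors."""
    idx = knn_graph(joints, k)                       # (J, k)
    neighbors = features[idx]                        # (J, k, C)
    center = features[:, None, :].repeat(k, axis=1)  # (J, k, C)
    edges = np.concatenate([center, neighbors - center], axis=-1)  # (J, k, 2C)
    out = np.maximum(edges @ weight, 0.0)            # shared linear map + ReLU
    return out.max(axis=1)                           # (J, C_out) max aggregation

# Toy example: 20 joints with 3-D coordinates used as both positions and features
rng = np.random.default_rng(0)
joints = rng.normal(size=(20, 3))
W = rng.normal(size=(6, 8))                          # maps 2*3 edge dims -> 8
out = edge_conv(joints, joints, k=4, weight=W)
print(out.shape)                                     # (20, 8)
```

Because `knn_graph` is re-run on each call, stacking such layers with learned intermediate features makes the neighborhood structure "dynamic" in the sense the abstract uses: joints that become close in feature space are connected even if they are distant in the skeleton.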
Main Authors: | Tasnim, Nusrat; Baek, Joong-Hwan
---|---
Format: | Online Article Text
Language: | English
Published: | MDPI, 2023
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9864180/ https://www.ncbi.nlm.nih.gov/pubmed/36679576 http://dx.doi.org/10.3390/s23020778
_version_ | 1784875519309774848 |
author | Tasnim, Nusrat; Baek, Joong-Hwan |
author_facet | Tasnim, Nusrat; Baek, Joong-Hwan |
author_sort | Tasnim, Nusrat |
collection | PubMed |
description | To provide accessible, intelligent, and efficient remote access such as the internet of things, rehabilitation, autonomous driving, virtual games, and healthcare, human action recognition (HAR) has gained much attention among computer vision researchers. Several methods have already been addressed to ensure effective and efficient action recognition based on different perspectives including data modalities, feature design, network configuration, and application domains. In this article, we design a new deep learning model by integrating criss-cross attention and edge convolution to extract discriminative features from the skeleton sequence for action recognition. The attention mechanism is applied in spatial and temporal directions to pursue the intra- and inter-frame relationships. Then, several edge convolutional layers are conducted to explore the geometric relationships among the neighboring joints in the human body. The proposed model is dynamically updated after each layer by recomputing the graph on the basis of k-nearest joints for learning local and global information in action sequences. We used publicly available benchmark skeleton datasets such as UTD-MHAD (University of Texas at Dallas multimodal human action dataset) and MSR-Action3D (Microsoft action 3D) to evaluate the proposed method. We also investigated the proposed method with different configurations of network architectures to assure effectiveness and robustness. The proposed method achieved average accuracies of 99.53% and 95.64% on the UTD-MHAD and MSR-Action3D datasets, respectively, outperforming state-of-the-art methods. |
format | Online Article Text |
id | pubmed-9864180 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9864180 2023-01-22 Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition Tasnim, Nusrat; Baek, Joong-Hwan Sensors (Basel) Article MDPI 2023-01-10 /pmc/articles/PMC9864180/ /pubmed/36679576 http://dx.doi.org/10.3390/s23020778 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Tasnim, Nusrat; Baek, Joong-Hwan Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title | Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title_full | Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title_fullStr | Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title_full_unstemmed | Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title_short | Dynamic Edge Convolutional Neural Network for Skeleton-Based Human Action Recognition |
title_sort | dynamic edge convolutional neural network for skeleton-based human action recognition |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9864180/ https://www.ncbi.nlm.nih.gov/pubmed/36679576 http://dx.doi.org/10.3390/s23020778 |
work_keys_str_mv | AT tasnimnusrat dynamicedgeconvolutionalneuralnetworkforskeletonbasedhumanactionrecognition AT baekjoonghwan dynamicedgeconvolutionalneuralnetworkforskeletonbasedhumanactionrecognition |