
Long Jump Action Recognition Based on Deep Convolutional Neural Network


Bibliographic Details
Main Author: Wang, Zhiteng
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9168106/
https://www.ncbi.nlm.nih.gov/pubmed/35676962
http://dx.doi.org/10.1155/2022/3832118
Description
Summary: The long jump is a test item in national student physical health monitoring and reflects the quality of students' lower-limb strength. It is a highly technical activity comprising four basic movements: the run-up, take-off, flight, and landing. Many students have technical problems with these movements, so their test scores do not objectively reflect their true physical condition, which affects the accuracy of the test results. To provide rapid diagnostic feedback on students' long jump movements, we design and develop a long jump action recognition method based on a deep convolutional neural network. In this paper, we first review traditional vision-based action recognition algorithms; we then apply 3D convolution to extract spatiotemporal features of the long jump action along three directions of the video block and fuse the features of the three directions in different ways to achieve feature complementarity; finally, exploiting the multimodality of the long jump action data, we use the 3D convolutional neural network to train on RGB images first and then on depth images. This joint training accelerates convergence and improves the accuracy of the network on both depth and edge images. The experiments compare the recognition performance of tandem (concatenation) fusion, maximum fusion, and multiplicative fusion of features at the score layer; the highest accuracy, 82.3%, is achieved by tandem fusion with all three modalities fused.
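
The abstract does not give the network architecture, so the following is only a minimal illustrative sketch of the general idea it describes: a small 3D-CNN branch applied to the same video block viewed along three orientations, with score-layer fusion by concatenation, element-wise maximum, or element-wise product. All module names, layer sizes, the permutation scheme used to form the three "directions", and the number of classes are assumptions for illustration, not the paper's actual design.

```python
# Hypothetical sketch of three-direction 3D-conv feature extraction and
# score-layer fusion; layer sizes and class count are assumed, not from the paper.
import torch
import torch.nn as nn

class C3DBranch(nn.Module):
    """Small 3D-conv branch mapping a (B, C, T, H, W) video block to class scores."""
    def __init__(self, in_channels=3, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global spatio-temporal pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)       # (B, 64)
        return self.classifier(f)             # (B, num_classes)

def three_direction_scores(block, branches):
    """Score each of three orientations of the same video block.

    block: (B, C, T, H, W). Here the three 'directions' are formed by letting
    T, H, or W play the role of the depth axis of the 3D convolution
    (an assumption about how the directions are defined).
    """
    views = [
        block,                                # original T-H-W order
        block.permute(0, 1, 3, 2, 4),         # H treated as the depth axis
        block.permute(0, 1, 4, 3, 2),         # W treated as the depth axis
    ]
    return [branch(v) for branch, v in zip(branches, views)]

def fuse_scores(scores, mode="concat"):
    """Score-layer fusion: concatenation, element-wise max, or product."""
    if mode == "concat":
        return torch.cat(scores, dim=1)       # a further linear head would map to classes
    if mode == "max":
        return torch.stack(scores).max(dim=0).values
    if mode == "mul":
        out = scores[0]
        for s in scores[1:]:
            out = out * s
        return out
    raise ValueError(f"unknown fusion mode: {mode}")

if __name__ == "__main__":
    branches = nn.ModuleList([C3DBranch() for _ in range(3)])
    clip = torch.randn(2, 3, 16, 112, 112)    # batch of RGB video blocks
    fused = fuse_scores(three_direction_scores(clip, branches), mode="max")
    print(fused.shape)                        # torch.Size([2, 10])
```

In the same spirit, the joint multimodal training the abstract mentions could be approximated by training such branches on RGB clips first and reusing those weights to initialize the depth- and edge-image streams before fine-tuning, which is one common way a shared 3D-CNN backbone can speed up convergence across modalities; the paper's exact procedure is not specified here.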