
Deep Learning-Based Yoga Posture Recognition Using the Y_PN-MSSD Model for Yoga Practitioners

Bibliographic Details
Main Authors: Upadhyay, Aman; Basha, Niha Kamal; Ananthakrishnan, Balasundaram
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9956159/
https://www.ncbi.nlm.nih.gov/pubmed/36833142
http://dx.doi.org/10.3390/healthcare11040609
Description
Summary: In today’s digital world, and in light of the growing pandemic, many yoga instructors opt to teach online. However, even after learning or being trained by the best available sources, such as videos, blogs, journals, or essays, users have no live tracking to tell them whether they are holding poses correctly, which can lead to posture problems and health issues later in life. Existing technology can assist in this regard; however, beginner-level yoga practitioners have no way of knowing whether their position is good or poor without an instructor’s help. As a result, automatic assessment of yoga postures is proposed using the Y_PN-MSSD model, in which Pose-Net and Mobile-Net SSD (together referred to as TFlite Movenet) play a major role, allowing practitioners to be alerted to incorrect poses. The Pose-Net layer handles feature point detection, while the Mobile-Net SSD layer performs human detection in each frame. The model is organized into three stages. First, in the data collection/preparation stage, yoga postures are captured from four users as well as from an open-source dataset covering seven yoga poses. Then, using the collected data, the model is trained, with feature extraction performed by connecting key points of the human body. Finally, the yoga posture is recognized, and the model assists the user by live-tracking their poses and correcting them on the fly, achieving 99.88% accuracy. This model outperforms the Pose-Net CNN model. It can therefore serve as a starting point for building a system that helps people practice yoga with an intelligent, inexpensive, and effective virtual yoga trainer.
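
The abstract describes keypoint detection with a TFLite MoveNet-style model followed by feature extraction over the detected body key points. The minimal Python sketch below illustrates only that stage; it is not the authors' code, and the model file name, input size, and output layout are assumptions based on the publicly available single-pose MoveNet TFLite models rather than on the paper's trained Y_PN-MSSD weights.

# Minimal sketch (assumptions noted above): run a TFLite MoveNet-style single-pose
# model on one video frame and flatten the detected key points into a feature
# vector, roughly mirroring the "connecting key points of the human body" step.
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical model file; substitute the actual trained pose model.
interpreter = tf.lite.Interpreter(model_path="movenet_singlepose_lightning.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def extract_keypoints(frame_bgr):
    """Return an array of shape (17, 3): (y, x, confidence) for each body key point."""
    size = input_details[0]["shape"][1]            # e.g. 192 for MoveNet Lightning
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (size, size))
    img = np.expand_dims(img, axis=0).astype(input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], img)
    interpreter.invoke()
    # Single-pose MoveNet output has shape [1, 1, 17, 3].
    return interpreter.get_tensor(output_details[0]["index"])[0, 0]

def pose_feature_vector(keypoints):
    """Flatten normalized (y, x) coordinates into a 34-dimensional feature vector
    that a downstream yoga-pose classifier could be trained on."""
    return keypoints[:, :2].flatten()

# Usage: read frames from a webcam, call extract_keypoints on each frame, and feed
# pose_feature_vector(...) to a trained pose classifier (not shown here).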