
Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning


Bibliographic Details
Main Authors: Agami, Shahar, Riemer, Raziel, Berman, Sigal
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Rehabilitation Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9397931/
https://www.ncbi.nlm.nih.gov/pubmed/36188943
http://dx.doi.org/10.3389/fresc.2022.956381
_version_ 1784772230846087168
author Agami, Shahar
Riemer, Raziel
Berman, Sigal
author_facet Agami, Shahar
Riemer, Raziel
Berman, Sigal
author_sort Agami, Shahar
collection PubMed
description Low-cost 3D video sensors equipped with routines for extracting skeleton data facilitate the widespread use of virtual reality (VR) for rehabilitation. However, the accuracy of the extracted skeleton data is often limited. Accuracy can be improved using a motion tracker, e.g., one based on a recurrent neural network (RNN). Yet, training an RNN requires a considerable amount of relevant and accurate training data. Training databases can be obtained using gold-standard motion tracking sensors, which limits the use of RNN trackers in environments and tasks without access to gold-standard sensors. Digital goniometers are typically cheaper, more portable, and simpler to use than gold-standard motion tracking sensors. The current work suggests a method for generating accurate skeleton data suitable for training an RNN motion tracker, based on the offline fusion of a Kinect 3D video sensor and an electronic goniometer. The fusion applies nonlinear constrained optimization, where the constraints are based on an advanced shoulder-centered kinematic model of the arm. The model builds on the representation of the arm as a triangle (the arm triangle). The shoulder-centered representation of the arm-triangle motion simplifies the constraint representation and consequently the optimization problem. To test the performance of the offline fusion and of the RNN trained using the optimized data, the arm motion of eight participants was recorded using a Kinect sensor, an electronic goniometer, and, for comparison, a passive-marker-based motion tracker. The data generated by fusing the Kinect and goniometer recordings were used to train two long short-term memory (LSTM) RNNs. The input to one RNN included both the Kinect and the goniometer data; the input to the second RNN included only Kinect data. The performance of the networks was compared to that of a tracker based on a Kalman filter and to the raw Kinect measurements. The fused data were highly accurate, considerably improving on the accuracy of the raw measurements. Both RNN trackers were highly accurate, and both were more accurate than the Kalman filter tracker and the raw Kinect measurements. The developed methods are suitable for integration with immersive VR rehabilitation systems in both clinic and home environments.
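The abstract describes the offline fusion step only at a high level. The following is a minimal, hypothetical sketch of how such a per-frame fusion could be posed as a nonlinear constrained optimization in Python with SciPy; the function names, the SLSQP solver, the included-angle convention for the goniometer reading, and the specific constraints (calibrated segment lengths plus a goniometer-matched elbow angle) are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): per-frame fusion of noisy Kinect
# joint positions with a goniometer elbow angle via nonlinear constrained
# optimization, assuming fixed upper-arm/forearm lengths from a calibration pose.
import numpy as np
from scipy.optimize import minimize

def fuse_frame(kinect_xyz, elbow_angle_gonio, l_upper, l_fore):
    """kinect_xyz: (3, 3) array of shoulder, elbow, wrist positions.
    elbow_angle_gonio: included elbow angle from the goniometer [rad].
    Returns corrected (3, 3) joint positions."""
    x0 = kinect_xyz.ravel()

    def cost(x):
        # Stay as close as possible to the Kinect measurement (least squares).
        return np.sum((x - x0) ** 2)

    def segments(x):
        s, e, w = x.reshape(3, 3)
        return e - s, w - e  # upper arm, forearm

    def elbow_angle(x):
        u, f = segments(x)
        cosang = np.dot(-u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
        return np.arccos(np.clip(cosang, -1.0, 1.0))

    constraints = [
        # Arm-triangle edges keep their calibrated lengths.
        {"type": "eq", "fun": lambda x: np.linalg.norm(segments(x)[0]) - l_upper},
        {"type": "eq", "fun": lambda x: np.linalg.norm(segments(x)[1]) - l_fore},
        # Elbow angle must match the goniometer reading.
        {"type": "eq", "fun": lambda x: elbow_angle(x) - elbow_angle_gonio},
    ]
    res = minimize(cost, x0, method="SLSQP", constraints=constraints)
    return res.x.reshape(3, 3)
```

Applied frame by frame to a recording, a routine like this would yield the "fused" joint trajectories that serve as training targets; how the actual optimization is formulated (state variables, soft vs. hard constraints, smoothing across frames) is detailed only in the full paper.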
format Online
Article
Text
id pubmed-9397931
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9397931 2022-09-29 Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning Agami, Shahar Riemer, Raziel Berman, Sigal Front Rehabil Sci Rehabilitation Sciences Low-cost 3D video sensors equipped with routines for extracting skeleton data facilitate the widespread use of virtual reality (VR) for rehabilitation. However, the accuracy of the extracted skeleton data is often limited. Accuracy can be improved using a motion tracker, e.g., one based on a recurrent neural network (RNN). Yet, training an RNN requires a considerable amount of relevant and accurate training data. Training databases can be obtained using gold-standard motion tracking sensors, which limits the use of RNN trackers in environments and tasks without access to gold-standard sensors. Digital goniometers are typically cheaper, more portable, and simpler to use than gold-standard motion tracking sensors. The current work suggests a method for generating accurate skeleton data suitable for training an RNN motion tracker, based on the offline fusion of a Kinect 3D video sensor and an electronic goniometer. The fusion applies nonlinear constrained optimization, where the constraints are based on an advanced shoulder-centered kinematic model of the arm. The model builds on the representation of the arm as a triangle (the arm triangle). The shoulder-centered representation of the arm-triangle motion simplifies the constraint representation and consequently the optimization problem. To test the performance of the offline fusion and of the RNN trained using the optimized data, the arm motion of eight participants was recorded using a Kinect sensor, an electronic goniometer, and, for comparison, a passive-marker-based motion tracker. The data generated by fusing the Kinect and goniometer recordings were used to train two long short-term memory (LSTM) RNNs. The input to one RNN included both the Kinect and the goniometer data; the input to the second RNN included only Kinect data. The performance of the networks was compared to that of a tracker based on a Kalman filter and to the raw Kinect measurements. The fused data were highly accurate, considerably improving on the accuracy of the raw measurements. Both RNN trackers were highly accurate, and both were more accurate than the Kalman filter tracker and the raw Kinect measurements. The developed methods are suitable for integration with immersive VR rehabilitation systems in both clinic and home environments. Frontiers Media S.A. 2022-08-16 /pmc/articles/PMC9397931/ /pubmed/36188943 http://dx.doi.org/10.3389/fresc.2022.956381 Text en © 2022 Agami, Riemer and Berman. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) (https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
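The record above also restates the two LSTM trackers trained on the fused data. As a rough illustration only, a minimal PyTorch sketch of that setup might look like the following; the hidden size, placeholder input/output dimensions, Adam optimizer, and MSE loss are assumptions for illustration, since the record does not specify the paper's architecture or hyperparameters.

```python
# Hypothetical sketch (not the paper's architecture): an LSTM tracker mapping a
# window of Kinect (and optionally goniometer) samples to corrected joint
# positions, trained against the offline-fused data as targets.
import torch
import torch.nn as nn

class LSTMTracker(nn.Module):
    def __init__(self, n_inputs, n_outputs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):        # x: (batch, time, n_inputs)
        h, _ = self.lstm(x)
        return self.head(h)      # (batch, time, n_outputs)

# Two variants, mirroring the setup described above: one fed Kinect plus
# goniometer channels, one fed Kinect channels only (dimensions are placeholders).
net_fused_input = LSTMTracker(n_inputs=10, n_outputs=9)  # e.g., 9 joint coords + 1 angle
net_kinect_only = LSTMTracker(n_inputs=9, n_outputs=9)

def train(net, loader, epochs=50, lr=1e-3):
    """loader yields (input_sequences, fused_target_sequences) batches."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(net(x), y)
            loss.backward()
            opt.step()
```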
spellingShingle Rehabilitation Sciences
Agami, Shahar
Riemer, Raziel
Berman, Sigal
Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title_full Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title_fullStr Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title_full_unstemmed Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title_short Enhancing motion tracking accuracy of a low-cost 3D video sensor using a biomechanical model, sensor fusion, and deep learning
title_sort enhancing motion tracking accuracy of a low-cost 3d video sensor using a biomechanical model, sensor fusion, and deep learning
topic Rehabilitation Sciences
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9397931/
https://www.ncbi.nlm.nih.gov/pubmed/36188943
http://dx.doi.org/10.3389/fresc.2022.956381
work_keys_str_mv AT agamishahar enhancingmotiontrackingaccuracyofalowcost3dvideosensorusingabiomechanicalmodelsensorfusionanddeeplearning
AT riemerraziel enhancingmotiontrackingaccuracyofalowcost3dvideosensorusingabiomechanicalmodelsensorfusionanddeeplearning
AT bermansigal enhancingmotiontrackingaccuracyofalowcost3dvideosensorusingabiomechanicalmodelsensorfusionanddeeplearning