
Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction

Tracking a person's body movements in a natural living environment is a challenging undertaking. Such tracking information can be used to detect the onset of anomalies in movement patterns or as part of a remote monitoring environment, and it can be mapped onto and visualized through a virtual avatar model of the tracked person. This paper presents an initial experimental study of a commercially available deep-learning body tracking system, based on an RGB-D sensor, for virtual human model reconstruction. The study was carried out in an indoor environment under natural conditions. To evaluate the tracker, we experimentally examine its output, a skeleton (stick-figure) data structure, under several conditions in order to assess its robustness and identify its drawbacks. In addition, we show how the generic skeleton model can be mapped for virtual human model reconstruction. We found that the deep-learning tracking approach with an RGB-D sensor is susceptible to various environmental factors, which introduce noise into, or cause dropouts of, the estimated locations of skeleton joints, complicating the subsequent virtual model reconstruction. We present an initial approach for compensating for this noise, yielding smoother temporal variation of the joint coordinates in the captured skeleton data, and we explore how the extracted joint positions can be used as part of the virtual human model reconstruction.
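The record does not detail the authors' noise-compensation scheme. As an illustrative sketch only, the kind of temporal smoothing of joint coordinates the abstract describes can be approximated with a simple exponential filter over per-joint position time series, holding a joint at its last estimate when the tracker drops it. The function name, the `(T, J, 3)` frame layout, and the use of NaN to mark dropped joints are assumptions for this sketch, not the authors' method or API.

```python
import numpy as np

def smooth_joints(frames, alpha=0.3):
    """Exponentially smooth 3D skeleton joint positions over time.

    frames: array of shape (T, J, 3) -- T frames, J joints, xyz coordinates.
            NaN marks a joint the tracker failed to estimate in a frame.
    alpha:  smoothing factor in (0, 1]; smaller values smooth more heavily.
    Returns an array of the same shape with reduced frame-to-frame jitter.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    out[0] = frames[0]  # assumes the first frame has valid estimates
    for t in range(1, len(frames)):
        prev = out[t - 1]
        # Where the tracker dropped a joint, hold the previous estimate.
        cur = np.where(np.isnan(frames[t]), prev, frames[t])
        out[t] = alpha * cur + (1 - alpha) * prev
    return out
```

A lower `alpha` trades responsiveness for stability, which matters when the smoothed joints drive an avatar in real time.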


Bibliographic Details
Main Authors: Payandeh, Shahram, Wael, Jeffrey
Format: Online Article Text
Language: English
Published: Hindawi 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8460365/
https://www.ncbi.nlm.nih.gov/pubmed/34567110
http://dx.doi.org/10.1155/2021/5551753
_version_ 1784571733840232448
author Payandeh, Shahram
Wael, Jeffrey
author_facet Payandeh, Shahram
Wael, Jeffrey
author_sort Payandeh, Shahram
collection PubMed
description Tracking a person's body movements in a natural living environment is a challenging undertaking. Such tracking information can be used to detect the onset of anomalies in movement patterns or as part of a remote monitoring environment, and it can be mapped onto and visualized through a virtual avatar model of the tracked person. This paper presents an initial experimental study of a commercially available deep-learning body tracking system, based on an RGB-D sensor, for virtual human model reconstruction. The study was carried out in an indoor environment under natural conditions. To evaluate the tracker, we experimentally examine its output, a skeleton (stick-figure) data structure, under several conditions in order to assess its robustness and identify its drawbacks. In addition, we show how the generic skeleton model can be mapped for virtual human model reconstruction. We found that the deep-learning tracking approach with an RGB-D sensor is susceptible to various environmental factors, which introduce noise into, or cause dropouts of, the estimated locations of skeleton joints, complicating the subsequent virtual model reconstruction. We present an initial approach for compensating for this noise, yielding smoother temporal variation of the joint coordinates in the captured skeleton data, and we explore how the extracted joint positions can be used as part of the virtual human model reconstruction.
format Online
Article
Text
id pubmed-8460365
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-8460365 2021-09-24 Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction Payandeh, Shahram; Wael, Jeffrey Int J Telemed Appl Research Article Hindawi 2021-09-15 /pmc/articles/PMC8460365/ /pubmed/34567110 http://dx.doi.org/10.1155/2021/5551753 Text en Copyright © 2021 Shahram Payandeh and Jeffrey Wael.
https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Payandeh, Shahram
Wael, Jeffrey
Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title_full Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title_fullStr Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title_full_unstemmed Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title_short Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction
title_sort experimental study of a deep-learning rgb-d tracker for virtual remote human model reconstruction
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8460365/
https://www.ncbi.nlm.nih.gov/pubmed/34567110
http://dx.doi.org/10.1155/2021/5551753
work_keys_str_mv AT payandehshahram experimentalstudyofadeeplearningrgbdtrackerforvirtualremotehumanmodelreconstruction
AT waeljeffrey experimentalstudyofadeeplearningrgbdtrackerforvirtualremotehumanmodelreconstruction