Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model


Bibliographic Details
Main Authors: Aoyagi, Yukihiko, Yamada, Shigeki, Ueda, Shigeo, Iseki, Chifumi, Kondo, Toshiyuki, Mori, Keisuke, Kobayashi, Yoshiyuki, Fukami, Tadanori, Hoshimaru, Minoru, Ishikawa, Masatsune, Ohta, Yasuyuki
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9322512/
https://www.ncbi.nlm.nih.gov/pubmed/35890959
http://dx.doi.org/10.3390/s22145282
_version_ 1784756323140763648
author Aoyagi, Yukihiko
Yamada, Shigeki
Ueda, Shigeo
Iseki, Chifumi
Kondo, Toshiyuki
Mori, Keisuke
Kobayashi, Yoshiyuki
Fukami, Tadanori
Hoshimaru, Minoru
Ishikawa, Masatsune
Ohta, Yasuyuki
author_facet Aoyagi, Yukihiko
Yamada, Shigeki
Ueda, Shigeo
Iseki, Chifumi
Kondo, Toshiyuki
Mori, Keisuke
Kobayashi, Yoshiyuki
Fukami, Tadanori
Hoshimaru, Minoru
Ishikawa, Masatsune
Ohta, Yasuyuki
author_sort Aoyagi, Yukihiko
collection PubMed
description To quantitatively assess pathological gait, we developed a novel smartphone application that tracks full-body human motion in real time, without markers, from video captured by a smartphone monocular camera using a deep learning model. As training data, we prepared an original three-dimensional (3D) dataset comprising more than 1 million images captured from the 3D motion of 90 humanoid characters, together with the two-dimensional COCO 2017 dataset. A modified ResNet34 convolutional neural network learned 3D heatmap offset data consisting of 28 × 28 × 28 blocks with three red–green–blue channels at each of the 24 key points of whole-body motion. At each key point, the deviation of the hottest spot from the center of its cell was learned using the tanh function. The resulting iOS application detects the relative tri-axial coordinates of the 24 whole-body key points, centered on the navel, in real time without any motion-capture markers. From these relative coordinates, the 3D angles of the neck, lumbar, bilateral hip, knee, and ankle joints are estimated. Any human motion can thus be quantitatively and easily assessed with the new smartphone application, named Three-Dimensional Pose Tracker for Gait Test (TDPT-GT), without body markers or multiple cameras.
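As an illustration of the decoding step the description outlines (argmax over a 28 × 28 × 28 heatmap volume per key point, refined by tanh-range sub-cell offsets), here is a minimal Python sketch. The array shapes, the half-cell offset scaling, and the function name decode_keypoints are assumptions made for illustration, not the application's actual implementation.

```python
import numpy as np

GRID = 28           # heatmap resolution per axis (from the description)
NUM_KEYPOINTS = 24  # whole-body key points (from the description)

def decode_keypoints(heatmaps, offsets):
    """Decode normalized 3D keypoint coordinates from voxel heatmaps.

    heatmaps: (NUM_KEYPOINTS, GRID, GRID, GRID) confidence volumes.
    offsets:  (NUM_KEYPOINTS, 3, GRID, GRID, GRID) values in (-1, 1),
              the tanh-learned deviation of the true point from the
              center of the hottest cell (channel order x, y, z is a
              sketch assumption).
    Returns:  (NUM_KEYPOINTS, 3) coordinates normalized to [0, 1].
    """
    coords = np.zeros((NUM_KEYPOINTS, 3))
    for k in range(NUM_KEYPOINTS):
        # Index of the hottest voxel for this key point.
        z, y, x = np.unravel_index(np.argmax(heatmaps[k]),
                                   (GRID, GRID, GRID))
        # Sub-voxel refinement: assume +/-1 spans half a cell on
        # either side of the cell center (a sketch assumption).
        dx, dy, dz = 0.5 * offsets[k, :, z, y, x]
        coords[k] = ((x + 0.5 + dx) / GRID,
                     (y + 0.5 + dy) / GRID,
                     (z + 0.5 + dz) / GRID)
    return coords
```

Centering on the navel key point then reduces to subtracting that key point's row from every other row, yielding the relative tri-axial coordinates the record describes.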
format Online
Article
Text
id pubmed-9322512
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9322512 2022-07-27 Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model Aoyagi, Yukihiko Yamada, Shigeki Ueda, Shigeo Iseki, Chifumi Kondo, Toshiyuki Mori, Keisuke Kobayashi, Yoshiyuki Fukami, Tadanori Hoshimaru, Minoru Ishikawa, Masatsune Ohta, Yasuyuki Sensors (Basel) Communication To quantitatively assess pathological gait, we developed a novel smartphone application that tracks full-body human motion in real time, without markers, from video captured by a smartphone monocular camera using a deep learning model. As training data, we prepared an original three-dimensional (3D) dataset comprising more than 1 million images captured from the 3D motion of 90 humanoid characters, together with the two-dimensional COCO 2017 dataset. A modified ResNet34 convolutional neural network learned 3D heatmap offset data consisting of 28 × 28 × 28 blocks with three red–green–blue channels at each of the 24 key points of whole-body motion. At each key point, the deviation of the hottest spot from the center of its cell was learned using the tanh function. The resulting iOS application detects the relative tri-axial coordinates of the 24 whole-body key points, centered on the navel, in real time without any motion-capture markers. From these relative coordinates, the 3D angles of the neck, lumbar, bilateral hip, knee, and ankle joints are estimated. Any human motion can thus be quantitatively and easily assessed with the new smartphone application, named Three-Dimensional Pose Tracker for Gait Test (TDPT-GT), without body markers or multiple cameras. MDPI 2022-07-14 /pmc/articles/PMC9322512/ /pubmed/35890959 http://dx.doi.org/10.3390/s22145282 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
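The record also states that the 3D angles of the neck, lumbar, bilateral hip, knee, and ankle joints are estimated from the relative coordinates. A plausible minimal sketch, assuming the standard vector-angle formula at each joint, follows; the keypoint indices in the usage comment are hypothetical placeholders, not TDPT-GT's actual skeleton indexing.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle in degrees at `joint` between the joint->parent and
    joint->child segments, computed from 3D coordinates."""
    u = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clamp against floating-point drift before arccos.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: knee flexion from hip, knee, and ankle key points of a
# navel-centered pose; the indices 11, 13, 15 are hypothetical.
# knee_deg = joint_angle(coords[11], coords[13], coords[15])
```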
spellingShingle Communication
Aoyagi, Yukihiko
Yamada, Shigeki
Ueda, Shigeo
Iseki, Chifumi
Kondo, Toshiyuki
Mori, Keisuke
Kobayashi, Yoshiyuki
Fukami, Tadanori
Hoshimaru, Minoru
Ishikawa, Masatsune
Ohta, Yasuyuki
Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title_full Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title_fullStr Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title_full_unstemmed Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title_short Development of Smartphone Application for Markerless Three-Dimensional Motion Capture Based on Deep Learning Model
title_sort development of smartphone application for markerless three-dimensional motion capture based on deep learning model
topic Communication
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9322512/
https://www.ncbi.nlm.nih.gov/pubmed/35890959
http://dx.doi.org/10.3390/s22145282
work_keys_str_mv AT aoyagiyukihiko developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT yamadashigeki developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT uedashigeo developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT isekichifumi developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT kondotoshiyuki developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT morikeisuke developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT kobayashiyoshiyuki developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT fukamitadanori developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT hoshimaruminoru developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT ishikawamasatsune developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel
AT ohtayasuyuki developmentofsmartphoneapplicationformarkerlessthreedimensionalmotioncapturebasedondeeplearningmodel