
Deep Learning-Based Subtask Segmentation of Timed Up-and-Go Test Using RGB-D Cameras

The timed up-and-go (TUG) test is an efficient way to evaluate an individual’s basic functional mobility, such as standing up, walking, turning around, and sitting back. The total completion time of the TUG test is a metric indicating an individual’s overall mobility. Moreover, the fine-grained consumption time of the individual subtasks in the TUG test may provide important clinical information, such as elapsed time and speed of each TUG subtask, which may not only assist professionals in clinical interventions but also distinguish the functional recovery of patients. To perform more accurate, efficient, robust, and objective tests, this paper proposes a novel deep learning-based subtask segmentation of the TUG test using a dilated temporal convolutional network with a single RGB-D camera. Evaluation with three different subject groups (healthy young, healthy adult, stroke patients) showed that the proposed method demonstrated better generality and achieved a significantly higher and more robust performance (healthy young = 95.458%, healthy adult = 94.525%, stroke = 93.578%) than the existing rule-based and artificial neural network-based subtask segmentation methods. Additionally, the results indicated that the input from the pelvis alone achieved the best accuracy among many other single inputs or combinations of inputs, which allows a real-time inference (approximately 15 Hz) in edge devices, such as smartphones.
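The abstract's central mechanism is a dilated temporal convolutional network applied to a skeleton time series. As a hedged illustration only (the function names, kernel sizes, and dilation schedule below are my own and not taken from the paper), a minimal NumPy sketch shows how causal dilated convolution works and how stacking dilations grows the temporal receptive field without extra parameters:

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution over a frame sequence x.

    Each output y[t] mixes the taps x[t], x[t-d], x[t-2d], ...
    (zero-padded at the left edge), so no future frames are used.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        y[t] = sum(w[i] * xp[pad + t - i * dilation] for i in range(k))
    return y

def receptive_field(kernel_size, dilations):
    # Each layer adds (kernel_size - 1) * dilation frames of context.
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# A 4-layer stack with kernel 3 and dilations 1, 2, 4, 8
# sees 1 + 2 * (1 + 2 + 4 + 8) = 31 frames of motion history.
print(receptive_field(3, [1, 2, 4, 8]))  # → 31
```

The exponential dilation schedule is what lets a shallow network cover a multi-second TUG subtask (e.g. a full turn) while staying cheap enough for the roughly 15 Hz edge-device inference the abstract reports.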


Bibliographic Details
Main Authors: Choi, Yoonjeong, Bae, Yoosung, Cha, Baekdong, Ryu, Jeha
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9459743/
https://www.ncbi.nlm.nih.gov/pubmed/36080782
http://dx.doi.org/10.3390/s22176323
author Choi, Yoonjeong
Bae, Yoosung
Cha, Baekdong
Ryu, Jeha
collection PubMed
description The timed up-and-go (TUG) test is an efficient way to evaluate an individual’s basic functional mobility, such as standing up, walking, turning around, and sitting back. The total completion time of the TUG test is a metric indicating an individual’s overall mobility. Moreover, the fine-grained consumption time of the individual subtasks in the TUG test may provide important clinical information, such as elapsed time and speed of each TUG subtask, which may not only assist professionals in clinical interventions but also distinguish the functional recovery of patients. To perform more accurate, efficient, robust, and objective tests, this paper proposes a novel deep learning-based subtask segmentation of the TUG test using a dilated temporal convolutional network with a single RGB-D camera. Evaluation with three different subject groups (healthy young, healthy adult, stroke patients) showed that the proposed method demonstrated better generality and achieved a significantly higher and more robust performance (healthy young = 95.458%, healthy adult = 94.525%, stroke = 93.578%) than the existing rule-based and artificial neural network-based subtask segmentation methods. Additionally, the results indicated that the input from the pelvis alone achieved the best accuracy among many other single inputs or combinations of inputs, which allows a real-time inference (approximately 15 Hz) in edge devices, such as smartphones.
format Online
Article
Text
id pubmed-9459743
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9459743 2022-09-10 Sensors (Basel), Article. MDPI 2022-08-23 /pmc/articles/PMC9459743/ /pubmed/36080782 http://dx.doi.org/10.3390/s22176323 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Deep Learning-Based Subtask Segmentation of Timed Up-and-Go Test Using RGB-D Cameras
topic Article