
Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot

This study introduces a novel convolutional neural network (CNN) architecture, encompassing both single- and multi-head designs, developed to identify a user’s locomotion activity while using a wearable lower limb robot. Our research involved 500 healthy adult participants in an activities of daily living (ADL) space, conducted from 1 September to 30 November 2022. We collected prospective data to identify five locomotion activities (level ground walking, stair ascent/descent, and ramp ascent/descent) across three terrains: flat ground, staircase, and ramp. To evaluate the predictive capabilities of the proposed CNN architectures, we compared their performance with that of three other models: one CNN and two hybrid models (CNN-LSTM and LSTM-CNN). Experiments were conducted using multivariate signals of various types obtained from electromyograms (EMGs) and the wearable robot. Our results reveal that the deeper CNN architecture significantly surpasses the performance of the three competing models. The proposed model, leveraging encoder data such as hip angles and velocities, along with postural signals such as roll, pitch, and yaw from the wearable lower limb robot, achieved superior performance with an inference speed of 1.14 s. Specifically, the F-measure of the proposed model reached 96.17%, compared to 90.68% for DDLMI, 94.41% for DeepConvLSTM, and 95.57% for LSTM-CNN.


Bibliographic Details
Main Authors: Son, Chang-Sik, Kang, Won-Seok
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10525937/
https://www.ncbi.nlm.nih.gov/pubmed/37760184
http://dx.doi.org/10.3390/bioengineering10091082
author Son, Chang-Sik
Kang, Won-Seok
collection PubMed
description This study introduces a novel convolutional neural network (CNN) architecture, encompassing both single- and multi-head designs, developed to identify a user’s locomotion activity while using a wearable lower limb robot. Our research involved 500 healthy adult participants in an activities of daily living (ADL) space, conducted from 1 September to 30 November 2022. We collected prospective data to identify five locomotion activities (level ground walking, stair ascent/descent, and ramp ascent/descent) across three terrains: flat ground, staircase, and ramp. To evaluate the predictive capabilities of the proposed CNN architectures, we compared their performance with that of three other models: one CNN and two hybrid models (CNN-LSTM and LSTM-CNN). Experiments were conducted using multivariate signals of various types obtained from electromyograms (EMGs) and the wearable robot. Our results reveal that the deeper CNN architecture significantly surpasses the performance of the three competing models. The proposed model, leveraging encoder data such as hip angles and velocities, along with postural signals such as roll, pitch, and yaw from the wearable lower limb robot, achieved superior performance with an inference speed of 1.14 s. Specifically, the F-measure of the proposed model reached 96.17%, compared to 90.68% for DDLMI, 94.41% for DeepConvLSTM, and 95.57% for LSTM-CNN.
format Online
Article
Text
id pubmed-10525937
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10525937 2023-09-28 Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot Son, Chang-Sik Kang, Won-Seok Bioengineering (Basel) Article MDPI 2023-09-13 /pmc/articles/PMC10525937/ /pubmed/37760184 http://dx.doi.org/10.3390/bioengineering10091082 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10525937/
https://www.ncbi.nlm.nih.gov/pubmed/37760184
http://dx.doi.org/10.3390/bioengineering10091082