
Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling

Bibliographic Details
Main Authors: Korivand, Soroush; Jalili, Nader; Gong, Jiaqi
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007537/
https://www.ncbi.nlm.nih.gov/pubmed/36904901
http://dx.doi.org/10.3390/s23052698
author Korivand, Soroush
Jalili, Nader
Gong, Jiaqi
collection PubMed
description Locomotor impairment is a highly prevalent source of disability that significantly impacts the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study the features of musculoskeletal drivers and clinical conditions. Recent efforts to apply reinforcement learning (RL) techniques are promising for simulating human locomotion and revealing musculoskeletal drivers. However, these simulations often fail to mimic natural human locomotion because most reinforcement strategies have yet to consider any reference data regarding human movement. To address these challenges, this study designed a reward function that combines trajectory optimization rewards (TOR) with bio-inspired rewards derived from reference motion data captured by a single Inertial Measurement Unit (IMU) sensor mounted on the participants’ pelvis. The TOR portion of the reward function was adapted from previous research on walking simulations. The experimental results showed that simulated agents trained with the modified reward function mimicked the collected IMU data more closely, meaning that the simulated human locomotion was more realistic. Used as a bio-inspired cost, the IMU data also enhanced the agent’s ability to converge during training, so the models converged faster than those developed without reference motion data. Consequently, human locomotion can be simulated more quickly, in a broader range of environments, and with better simulation performance.
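The description above outlines the core technique: a reward function that adds a bio-inspired imitation term, computed from pelvis-mounted IMU data, to conventional trajectory optimization rewards (TOR). The record contains no implementation details, so the following is a minimal Python sketch of how such a combined reward could be structured; the function names, the Gaussian tracking kernel, and all weights and constants are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a TOR + IMU-imitation reward, as described in the abstract.
# All names (imu_imitation_reward, tor_reward, W_TOR, sigma, ...) are assumptions
# for illustration, not the authors' code.
import numpy as np


def imu_imitation_reward(sim_imu: np.ndarray, ref_imu: np.ndarray, sigma: float = 1.0) -> float:
    """Reward for matching the reference pelvis IMU sample (e.g. 3-axis
    acceleration + 3-axis angular velocity) at the current timestep.
    A Gaussian-style kernel gives 1.0 for a perfect match and decays
    smoothly as the tracking error grows."""
    err = np.linalg.norm(sim_imu - ref_imu)
    return float(np.exp(-(err ** 2) / (2.0 * sigma ** 2)))


def tor_reward(forward_velocity: float, effort: float,
               target_velocity: float = 1.25, effort_weight: float = 1e-3) -> float:
    """Stand-in for trajectory-optimization terms used in prior walking
    simulations: reward forward progress near a target speed, penalize effort."""
    velocity_term = -((forward_velocity - target_velocity) ** 2)
    effort_term = -effort_weight * effort
    return velocity_term + effort_term


def combined_reward(sim_imu, ref_imu, forward_velocity, effort,
                    w_tor: float = 1.0, w_imu: float = 1.0) -> float:
    """Total per-step reward: weighted TOR terms plus the IMU-based bio-inspired term."""
    return w_tor * tor_reward(forward_velocity, effort) \
        + w_imu * imu_imitation_reward(sim_imu, ref_imu)


if __name__ == "__main__":
    # Toy example: one 6-channel IMU sample and a slightly perturbed simulation of it.
    ref = np.array([0.1, 0.0, 9.8, 0.02, 0.01, 0.3])
    sim = ref + np.random.normal(scale=0.05, size=6)
    print(combined_reward(sim, ref, forward_velocity=1.2, effort=12.0))
```

In this sketch the relative weights w_tor and w_imu control how strongly the agent is pulled toward the recorded pelvis motion versus the task objectives; the abstract reports that adding the IMU-based term both improved realism and sped up convergence.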
format Online
Article
Text
id pubmed-10007537
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10007537 2023-03-12 Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling Korivand, Soroush; Jalili, Nader; Gong, Jiaqi. Sensors (Basel), Article. MDPI 2023-03-01 /pmc/articles/PMC10007537/ /pubmed/36904901 http://dx.doi.org/10.3390/s23052698 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007537/
https://www.ncbi.nlm.nih.gov/pubmed/36904901
http://dx.doi.org/10.3390/s23052698