Optimum trajectory learning in musculoskeletal systems with model predictive control and deep reinforcement learning
| Main Authors: | Denizdurduran, Berat; Markram, Henry; Gewaltig, Marc-Oliver |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer Berlin Heidelberg, 2022 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9691497/ https://www.ncbi.nlm.nih.gov/pubmed/35951117 http://dx.doi.org/10.1007/s00422-022-00940-x |
| _version_ | 1784837058233106432 |
|---|---|
| author | Denizdurduran, Berat; Markram, Henry; Gewaltig, Marc-Oliver |
| author_sort | Denizdurduran, Berat |
| collection | PubMed |
| description | From a computational point of view, musculoskeletal control is the problem of controlling a high-degree-of-freedom, dynamic multi-body system driven by redundant muscle units. A critical challenge in controlling skeletal joints with antagonistic muscle pairs is finding methods robust enough to address this ill-posed nonlinear problem. To address it, we implemented a twofold optimization and learning framework specialized in handling the redundancies in muscle control. In the first part, we used model predictive control to obtain energy-efficient skeletal trajectories that mimic human movements. In the second part, we used deep reinforcement learning to obtain the sequence of muscle stimuli that reproduces these skeletal trajectories through muscle control. We observed that the desired muscle stimuli can only be constructed efficiently by integrating the state and the control input in a closed-loop setting, resembling the proprioceptive integration in spinal cord circuits. In this work, we showed how a variety of reference trajectories can be obtained with optimal control and how these reference trajectories are mapped to musculoskeletal control with deep reinforcement learning. From characteristic human arm movements to an obstacle-avoidance experiment, our simulation results confirm the capabilities of the optimization and learning framework for a variety of dynamic movement trajectories. In summary, the proposed framework offers a pipeline that compensates for the lack of human motion-capture experiments and allows studying the range of muscle activations needed to replicate a specific trajectory of interest. Using the trajectories from optimal control as reference signals for the reinforcement learning implementation allowed us to obtain optimal, human-like behaviour of the musculoskeletal system, providing a framework for studying human movement in in-silico experiments. The framework can also support studies of upper-arm rehabilitation with assistive robots, since recordings of healthy subjects' movements can serve as references for designing the control architecture of assistive robots that compensate for behavioural deficits. Hence, the framework opens the possibility of replicating or complementing labour-intensive, time-consuming and costly experiments with human subjects in movement studies and digital-twin rehabilitation. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00422-022-00940-x. |
| format | Online Article Text |
| id | pubmed-9691497 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2022 |
| publisher | Springer Berlin Heidelberg |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-9691497, 2022-11-26. Biol Cybern, Original Article. Springer Berlin Heidelberg, published online 2022-08-11. © The Author(s) 2022, corrected publication 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
| title | Optimum trajectory learning in musculoskeletal systems with model predictive control and deep reinforcement learning |
| topic | Original Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9691497/ https://www.ncbi.nlm.nih.gov/pubmed/35951117 http://dx.doi.org/10.1007/s00422-022-00940-x |
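
The record above describes a two-stage pipeline: model predictive control first produces an energy-efficient reference trajectory, and deep reinforcement learning then learns the muscle stimuli that track it in closed loop. The snippet below is a minimal, self-contained sketch of that idea only, not code from the paper or its supplementary material: it substitutes a 1-DoF double integrator for the musculoskeletal arm, a random-shooting planner for the paper's MPC formulation, and a hand-written tracking reward for the full DRL training loop; every name, constant, and weight in it is an illustrative assumption.

```python
# Sketch of the two-stage idea in the abstract, NOT the authors' implementation:
# (1) an optimal-control stage produces an energy-efficient reference trajectory,
# (2) that reference defines the tracking reward an RL agent would maximise while
# driving redundant actuators in closed loop. A 1-DoF double integrator stands in
# for the musculoskeletal arm; all names and parameters are illustrative.

import numpy as np

DT, HORIZON, STEPS = 0.02, 25, 100   # integration step, MPC horizon, episode length
W_GOAL, W_EFFORT = 10.0, 0.1         # cost weights: reach the target vs. spend energy

def step(state, u):
    """Double-integrator dynamics: state = (position, velocity), u = net torque."""
    pos, vel = state
    return np.array([pos + DT * vel, vel + DT * u])

def rollout_cost(state, controls, target):
    """Cost of an open-loop control sequence: terminal error plus control effort."""
    for u in controls:
        state = step(state, u)
    goal_err = (state[0] - target) ** 2 + 0.1 * state[1] ** 2
    return W_GOAL * goal_err + W_EFFORT * np.sum(controls ** 2)

def mpc_reference(x0, target, rng):
    """Stage 1: random-shooting MPC producing an energy-efficient reference."""
    state, trajectory = np.array(x0, dtype=float), []
    for _ in range(STEPS):
        candidates = rng.uniform(-1.0, 1.0, size=(256, HORIZON))
        costs = [rollout_cost(state, c, target) for c in candidates]
        u = candidates[int(np.argmin(costs))][0]   # apply first action, then re-plan
        state = step(state, u)
        trajectory.append((state.copy(), u))
    return trajectory

def tracking_reward(state, muscle_excitations, ref_state):
    """Stage 2 (sketch): reward an RL policy would maximise while mapping
    (current state, reference sample) to redundant muscle excitations."""
    tracking_error = np.sum((state - ref_state) ** 2)
    effort = np.sum(np.square(muscle_excitations))
    return -tracking_error - 0.01 * effort

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = mpc_reference(x0=(0.0, 0.0), target=1.0, rng=rng)
    final_state, _ = ref[-1]
    print(f"reference endpoint: pos={final_state[0]:.3f}, vel={final_state[1]:.3f}")
    # Example of the stage-2 reward evaluated at the start of the reference,
    # with six hypothetical muscle excitations set to zero.
    r0 = tracking_reward(np.zeros(2), muscle_excitations=np.zeros(6), ref_state=ref[0][0])
    print(f"tracking reward at t=0: {r0:.3f}")
```

A real implementation would replace `step` with the musculoskeletal simulator's dynamics and train an actor-critic agent whose observation concatenates the current state and the reference sample, mirroring the closed-loop integration of state and control input highlighted in the abstract.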