A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern
This study introduces a novel controller based on a Reinforcement Learning (RL) algorithm for real-time adaptation of the stimulation pattern during FES-cycling. Core to our approach is the introduction of an RL agent that interacts with the cycling environment and learns through trial and error how to modulate the electrical charge applied to the stimulated muscle groups according to a predefined policy while tracking a reference cadence.
| Main authors: | Coelho-Magalhães, Tiago; Azevedo Coste, Christine; Resende-Martins, Henrique |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2022 |
| Subjects: | Article |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9741342/ https://www.ncbi.nlm.nih.gov/pubmed/36501826 http://dx.doi.org/10.3390/s22239126 |
_version_ | 1784848295836778496 |
---|---|
author | Coelho-Magalhães, Tiago; Azevedo Coste, Christine; Resende-Martins, Henrique
author_facet | Coelho-Magalhães, Tiago; Azevedo Coste, Christine; Resende-Martins, Henrique
author_sort | Coelho-Magalhães, Tiago |
collection | PubMed |
description | This study introduces a novel controller based on a Reinforcement Learning (RL) algorithm for real-time adaptation of the stimulation pattern during FES-cycling. Core to our approach is the introduction of an RL agent that interacts with the cycling environment and learns through trial and error how to modulate the electrical charge applied to the stimulated muscle groups according to a predefined policy while tracking a reference cadence. Instead of a static stimulation pattern to be modified by a control law, we hypothesized that a non-stationary baseline set of parameters would better adjust the amount of injected electrical charge to the time-varying characteristics of the musculature. Overground FES-assisted cycling sessions were performed by a subject with spinal cord injury (SCI AIS-A, T8). To track a predefined pedaling cadence, two closed-loop control laws were used simultaneously to modulate the pulse intensity of the stimulation channels responsible for evoking the muscle contractions. First, a Proportional-Integral (PI) controller adjusted the current amplitude of the stimulation channels around an initial parameter setting with predefined pulse amplitude, pulse width, and fixed frequency. In parallel, an RL algorithm with a decayed-epsilon-greedy strategy randomly explored nine different variations of the pulse amplitude and width parameters over the same stimulation setting, aiming to adjust the injected electrical charge according to a predefined policy. The performance of this global control strategy was evaluated in two different RL settings and explored in two different cycling scenarios. The participant was able to pedal overground for distances over 3.5 km, and the results showed that the RL agent learned to modify the stimulation pattern according to the predefined policy while simultaneously tracking the predefined pedaling cadence. Despite the simplicity of our approach and the existence of more sophisticated RL algorithms, our method can be used to reduce the time needed to define stimulation patterns. Our results suggest promising directions for future research, since more efficient stimulation cost dynamics can be defined and implemented for the agent to learn, potentially improving cycling performance. An illustrative sketch of this control strategy is given after the record below. |
format | Online Article Text |
id | pubmed-9741342 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9741342 2022-12-11 A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern Coelho-Magalhães, Tiago Azevedo Coste, Christine Resende-Martins, Henrique Sensors (Basel) Article This study introduces a novel controller based on a Reinforcement Learning (RL) algorithm for real-time adaptation of the stimulation pattern during FES-cycling. Core to our approach is the introduction of an RL agent that interacts with the cycling environment and learns through trial and error how to modulate the electrical charge applied to the stimulated muscle groups according to a predefined policy and while tracking a reference cadence. Instead of a static stimulation pattern to be modified by a control law, we hypothesized that a non-stationary baseline set of parameters would better adjust the amount of injected electrical charge to the time-varying characteristics of the musculature. Overground FES-assisted cycling sessions were performed by a subject with spinal cord injury (SCI AIS-A, T8). For tracking a predefined pedaling cadence, two closed-loop control laws were simultaneously used to modulate the pulse intensity of the stimulation channels responsible for evoking the muscle contractions. First, a Proportional-Integral (PI) controller was used to control the current amplitude of the stimulation channels over an initial parameter setting with predefined pulse amplitude, width and fixed frequency parameters. In parallel, an RL algorithm with a decayed-epsilon-greedy strategy was implemented to randomly explore nine different variations of pulse amplitude and width parameters over the same stimulation setting, aiming to adjust the injected electrical charge according to a predefined policy. The performance of this global control strategy was evaluated in two different RL settings and explored in two different cycling scenarios. The participant was able to pedal overground for distances over 3.5 km, and the results evidenced the RL agent learned to modify the stimulation pattern according to the predefined policy and was simultaneously able to track a predefined pedaling cadence. Despite the simplicity of our approach and the existence of more sophisticated RL algorithms, our method can be used to reduce the time needed to define stimulation patterns. Our results suggest interesting research possibilities to be explored in the future to improve cycling performance since more efficient stimulation cost dynamics can be explored and implemented for the agent to learn. MDPI 2022-11-24 /pmc/articles/PMC9741342/ /pubmed/36501826 http://dx.doi.org/10.3390/s22239126 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Coelho-Magalhães, Tiago Azevedo Coste, Christine Resende-Martins, Henrique A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title | A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title_full | A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title_fullStr | A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title_full_unstemmed | A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title_short | A Novel Functional Electrical Stimulation-Induced Cycling Controller Using Reinforcement Learning to Optimize Online Muscle Activation Pattern |
title_sort | novel functional electrical stimulation-induced cycling controller using reinforcement learning to optimize online muscle activation pattern |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9741342/ https://www.ncbi.nlm.nih.gov/pubmed/36501826 http://dx.doi.org/10.3390/s22239126 |
work_keys_str_mv | AT coelhomagalhaestiago anovelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern AT azevedocostechristine anovelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern AT resendemartinshenrique anovelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern AT coelhomagalhaestiago novelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern AT azevedocostechristine novelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern AT resendemartinshenrique novelfunctionalelectricalstimulationinducedcyclingcontrollerusingreinforcementlearningtooptimizeonlinemuscleactivationpattern |
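The description above outlines the control strategy only in prose: a Proportional-Integral (PI) loop tracking a reference cadence combined with a decayed-epsilon-greedy agent exploring nine pulse amplitude/width variations. The sketch below is a minimal, illustrative Python rendering of how such a loop could be wired together; all class names, gains, parameter offsets, and the reward shaping are assumptions made for illustration and are not taken from the article.

```python
import random

# Nine (pulse amplitude, pulse width) offsets around the baseline setting.
# The specific values (mA, µs) are illustrative assumptions, not the paper's.
ACTIONS = [(da, dw) for da in (-2, 0, 2) for dw in (-50, 0, 50)]

class DecayedEpsilonGreedyAgent:
    """Tabular action-value agent with a decaying exploration rate."""
    def __init__(self, n_actions, eps_start=1.0, eps_min=0.05, decay=0.995):
        self.q = [0.0] * n_actions   # running action-value estimates
        self.n = [0] * n_actions     # visit counts per action
        self.eps, self.eps_min, self.decay = eps_start, eps_min, decay

    def select(self):
        # Explore with probability eps, otherwise exploit the best estimate.
        if random.random() < self.eps:
            a = random.randrange(len(self.q))
        else:
            a = max(range(len(self.q)), key=lambda i: self.q[i])
        self.eps = max(self.eps_min, self.eps * self.decay)
        return a

    def update(self, a, reward):
        # Incremental sample-average update of Q(a).
        self.n[a] += 1
        self.q[a] += (reward - self.q[a]) / self.n[a]

class PIController:
    """Simple PI law acting on the cadence tracking error."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def output(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def control_step(agent, pi, cadence_ref, cadence_meas, base_amp, base_width):
    """One assumed control cycle: the RL agent picks a baseline variation,
    the PI controller modulates current amplitude to track the cadence."""
    a = agent.select()
    d_amp, d_width = ACTIONS[a]
    error = cadence_ref - cadence_meas
    amplitude = base_amp + d_amp + pi.output(error)   # mA
    width = base_width + d_width                      # µs
    # Example reward: penalize cadence error and injected charge
    # (this cost shaping is an assumption, not the article's policy).
    reward = -abs(error) - 0.001 * amplitude * width
    agent.update(a, reward)
    return amplitude, width

# Hypothetical usage with made-up gains and baseline stimulation values:
agent = DecayedEpsilonGreedyAgent(len(ACTIONS))
pi = PIController(kp=0.8, ki=0.1, dt=0.05)
amp, width = control_step(agent, pi, cadence_ref=40.0, cadence_meas=35.0,
                          base_amp=30.0, base_width=400.0)
```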