Dynamical Motor Control Learned with Deep Deterministic Policy Gradient

Conventional models of motor control exploit the spatial representation of the controlled system to generate control commands. Typically, the control command is computed from the feedback state at a specific instant in time, so the controller behaves like an optimal regulator or a spatial filter applied to that feedback state. Yet, recent neuroscience studies have found that the motor network may constitute an autonomous dynamical system and that the temporal patterns of the control command can be contained in the dynamics of the motor network (the dynamical system hypothesis, DSH). Inspired by these findings, we propose a computational model that incorporates this neural mechanism, in which the control command is unfolded from a dynamical controller whose initial state is specified by the task parameters. The model is trained in a trial-and-error manner within the framework of deep deterministic policy gradient (DDPG). The experimental results show that the dynamical controller successfully learns the control policy for arm reaching movements, and the analysis of its internal activity provides computational evidence for the DSH of neural coding in motor cortices.


Bibliographic Details
Main Authors: Shi, Haibo; Sun, Yaoru; Li, Jie
Format: Online Article (Text)
Language: English
Published: Hindawi, 2018
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5831918/
https://www.ncbi.nlm.nih.gov/pubmed/29666634
http://dx.doi.org/10.1155/2018/8535429
Published in Comput Intell Neurosci (Research Article) by Hindawi, 31 January 2018. Copyright © 2018 Haibo Shi et al. This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
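
For readers who want a concrete picture of the architecture the abstract describes, below is a minimal Python sketch (not the authors' code): the task parameters set the initial state of a recurrent "dynamical controller", which then evolves autonomously and unfolds a sequence of motor commands; in the paper this actor would be trained with DDPG alongside a critic, which the sketch omits. The dimensions, the use of a GRU cell, and all names below are illustrative assumptions.

# Minimal sketch of a "dynamical controller" actor, assuming PyTorch.
import torch
import torch.nn as nn

TASK_DIM, STATE_DIM, CMD_DIM, STEPS = 2, 64, 2, 50  # e.g. 2D reach target, 2 joint torques

class DynamicalController(nn.Module):
    # Actor: unfolds a motor command sequence from an initial state set by the task.
    def __init__(self):
        super().__init__()
        self.init_state = nn.Linear(TASK_DIM, STATE_DIM)  # task parameters -> initial state
        self.dynamics = nn.GRUCell(1, STATE_DIM)          # recurrent dynamics, driven by a constant zero input
        self.readout = nn.Linear(STATE_DIM, CMD_DIM)      # hidden state -> motor command

    def forward(self, task):                              # task: (batch, TASK_DIM)
        h = torch.tanh(self.init_state(task))             # initial state specified by the task parameters
        zero_in = torch.zeros(task.size(0), 1)            # no external drive: the system runs autonomously
        commands = []
        for _ in range(STEPS):
            h = self.dynamics(zero_in, h)
            commands.append(torch.tanh(self.readout(h)))  # bounded continuous action, as in DDPG actors
        return torch.stack(commands, dim=1)               # (batch, STEPS, CMD_DIM)

actor = DynamicalController()
print(actor(torch.randn(4, TASK_DIM)).shape)              # torch.Size([4, 50, 2])

The design choice mirrored here is the one the abstract emphasizes: the command sequence is generated by the controller's internal dynamics from a task-specified initial state, rather than being recomputed at each step from the instantaneous feedback state.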