Control of neural systems at multiple scales using model-free, deep reinforcement learning
Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most of the current contributions to the field have focused on model-based control; however, models of neural systems are quite complex and difficult to design. To circumvent these issues, we adapt...
Main authors: | Mitchell, B. A.; Petzold, L. R. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2018 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6048054/ https://www.ncbi.nlm.nih.gov/pubmed/30013195 http://dx.doi.org/10.1038/s41598-018-29134-x |
_version_ | 1783340033175977984 |
---|---|
author | Mitchell, B. A.; Petzold, L. R. |
author_facet | Mitchell, B. A.; Petzold, L. R. |
author_sort | Mitchell, B. A. |
collection | PubMed |
description | Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most of the current contributions to the field have focused on model-based control; however, models of neural systems are quite complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that while model-free, DDPG is able to solve more difficult problems than can be solved by current methods. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work has been performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement towards more complex objectives in real systems. |
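The abstract mentions inducing global synchrony by entraining weakly coupled oscillators. The paper's actual simulation models and DDPG controller are not reproduced in this record; purely as background, a minimal NumPy sketch of the standard Kuramoto mean-field model and its order parameter (the usual scalar measure of global synchrony, r ≈ 1 meaning full phase alignment) might look like the following. All parameter values (N, K, dt, frequency spread) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    # Euler step of the mean-field Kuramoto model:
    #   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    # Kuramoto order parameter r in [0, 1]; r near 1 indicates global synchrony.
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 50
theta = rng.uniform(0.0, 2.0 * np.pi, N)   # random initial phases (desynchronized)
omega = rng.normal(1.0, 0.1, N)            # heterogeneous natural frequencies
r0 = order_parameter(theta)

# With coupling K well above the critical value for this frequency spread,
# the population self-synchronizes without any external control input.
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
r1 = order_parameter(theta)
```

In the paper's setting, the interesting regime is instead weak coupling, where synchrony does not emerge on its own and a learned stimulation policy must entrain the oscillators; this sketch only fixes the state representation and objective such a controller would operate on.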
format | Online Article Text |
id | pubmed-6048054 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-60480542018-07-19 Control of neural systems at multiple scales using model-free, deep reinforcement learning Mitchell, B. A. Petzold, L. R. Sci Rep Article Recent improvements in hardware and data collection have lowered the barrier to practical neural control. Most of the current contributions to the field have focused on model-based control; however, models of neural systems are quite complex and difficult to design. To circumvent these issues, we adapt a model-free method from the reinforcement learning literature, Deep Deterministic Policy Gradients (DDPG). Model-free reinforcement learning presents an attractive framework because of the flexibility it offers, allowing the user to avoid modeling system dynamics. We make use of this feature by applying DDPG to models of low-level and high-level neural dynamics. We show that while model-free, DDPG is able to solve more difficult problems than can be solved by current methods. These problems include the induction of global synchrony by entrainment of weakly coupled oscillators and the control of trajectories through a latent phase space of an underactuated network of neurons. While this work has been performed on simulated systems, it suggests that advances in modern reinforcement learning may enable the solution of fundamental problems in neural control and movement towards more complex objectives in real systems. Nature Publishing Group UK 2018-07-16 /pmc/articles/PMC6048054/ /pubmed/30013195 http://dx.doi.org/10.1038/s41598-018-29134-x Text en © The Author(s) 2018 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Mitchell, B. A. Petzold, L. R. Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title | Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title_full | Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title_fullStr | Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title_full_unstemmed | Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title_short | Control of neural systems at multiple scales using model-free, deep reinforcement learning |
title_sort | control of neural systems at multiple scales using model-free, deep reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6048054/ https://www.ncbi.nlm.nih.gov/pubmed/30013195 http://dx.doi.org/10.1038/s41598-018-29134-x |
work_keys_str_mv | AT mitchellba controlofneuralsystemsatmultiplescalesusingmodelfreedeepreinforcementlearning AT petzoldlr controlofneuralsystemsatmultiplescalesusingmodelfreedeepreinforcementlearning |