
Neural Networks With Motivation

Animals rely on internal motivational states to make decisions. The role of motivational salience in decision making is in the early stages of mathematical understanding. Here, we propose a reinforcement learning framework that relies on neural networks to learn optimal ongoing behavior for dynamically changing motivation values. First, we show that neural networks implementing Q-learning with motivational salience can navigate environments with dynamic rewards without adjustments in synaptic strengths when the needs of an agent shift. In this setting, our networks may display elements of addictive behaviors. Second, we use a similar framework in a hierarchical manager-agent system to implement a reinforcement learning algorithm with motivation that both infers motivational states and behaves. Finally, we show that, when trained in the Pavlovian conditioning setting, the responses of the neurons in our model resemble previously published neuronal recordings in the ventral pallidum, a basal ganglia structure involved in motivated behaviors. We conclude that motivation allows Q-learning networks to quickly adapt their behavior to conditions in which the expected reward is modulated by the agent’s dynamic needs. Our approach addresses the algorithmic rationale of motivation and takes a step toward better interpretability of behavioral data via inference of motivational dynamics in the brain.

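The abstract's central mechanism (Q-values conditioned on a dynamic motivation state, so that behavior adapts to shifting needs without any change to learned values or weights) can be illustrated with a minimal tabular sketch. This is not the authors' code; the line-world environment, two-resource reward structure, and all hyperparameters below are illustrative assumptions.

```python
# Sketch: tabular Q-learning where the motivation (need) state is part of the
# agent's input, so the same learned Q-table yields need-appropriate behavior.
import numpy as np

rng = np.random.default_rng(0)

n_pos, n_motiv, n_actions = 5, 2, 2          # line-world cells, needs, actions
Q = np.zeros((n_motiv, n_pos, n_actions))    # Q conditioned on motivation state
alpha, gamma, eps = 0.1, 0.9, 0.1

# Two resources at opposite ends; the reward an agent receives for reaching a
# resource is gated by whichever need is currently active (its salience).
reward_at = {0: np.array([1.0, 0.0]),            # resource 0 at the left end
             n_pos - 1: np.array([0.0, 1.0])}    # resource 1 at the right end

def step(pos, action):
    """Move left (action 0) or right (action 1) on a line of n_pos cells."""
    return max(0, min(n_pos - 1, pos + (1 if action == 1 else -1)))

for episode in range(2000):
    motiv = int(rng.integers(n_motiv))       # which need is active this episode
    pos = int(rng.integers(n_pos))
    for _ in range(20):
        if rng.random() < eps:               # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[motiv, pos]))
        nxt = step(pos, a)
        r = reward_at.get(nxt, np.zeros(n_motiv))[motiv]
        # Standard Q-learning update, indexed by the current motivation state.
        Q[motiv, pos, a] += alpha * (r + gamma * Q[motiv, nxt].max()
                                     - Q[motiv, pos, a])
        pos = nxt

# After training, the same Q-table steers the agent toward whichever resource
# matches its current need; switching needs requires no further learning.
print(np.argmax(Q[0, 2]), np.argmax(Q[1, 2]))
```

From the middle of the line, the greedy action under need 0 is "left" (toward resource 0) and under need 1 is "right" (toward resource 1), which mirrors the paper's point that behavior tracks dynamic needs without synaptic changes.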

Bibliographic Details
Main Authors: Shuvaev, Sergey A., Tran, Ngoc B., Stephenson-Jones, Marcus, Li, Bo, Koulakov, Alexei A.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7848953/
https://www.ncbi.nlm.nih.gov/pubmed/33536879
http://dx.doi.org/10.3389/fnsys.2020.609316
Published in Front Syst Neurosci (Neuroscience), Frontiers Media S.A., 2021-01-11.
Copyright © 2021 Shuvaev, Tran, Stephenson-Jones, Li and Koulakov. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.