Updating dopamine reward signals

Bibliographic Details
Main Author: Schultz, Wolfram
Format: Online Article Text
Language: English
Published: Current Opinion in Neurobiology, 2013
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3866681/
https://www.ncbi.nlm.nih.gov/pubmed/23267662
http://dx.doi.org/10.1016/j.conb.2012.11.012
Description
Summary: Recent work has advanced our knowledge of phasic dopamine reward prediction error signals. The error signal is bidirectional, closely reflects the higher-order prediction error described by temporal difference learning models, is compatible with both model-free and model-based reinforcement learning, reports subjective rather than physical reward value during temporal discounting, and reflects subjective stimulus perception rather than physical stimulus properties. Dopamine activations are primarily driven by reward, and to some extent by risk, whereas punishment and salience have only limited activating effects when appropriate controls are applied. The signal is homogeneous in its time course but heterogeneous in many other respects. It is essential for synaptic plasticity and for a range of behavioural learning situations.
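
For context, the prediction error referred to in this summary is commonly formalized as the standard temporal difference (TD) error of reinforcement learning; the formulation below is the textbook definition and is not quoted from the article itself:

\delta_t = r_{t+1} + \gamma\, V(s_{t+1}) - V(s_t)

Here r_{t+1} is the reward received after the state transition, V(s) is the learned value estimate of state s, and \gamma (with 0 \le \gamma \le 1) is the temporal discount factor. Phasic dopamine responses are interpreted as reporting \delta_t: activation when reward exceeds prediction, depression when it falls short.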