Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task
Prominent computational models describe a neural mechanism for learning from reward prediction errors, and it has been suggested that variations in this mechanism are reflected in personality factors such as trait extraversion. However, although trait extraversion has been linked to improved reward...
Main Authors: | Skatova, Anya; Chan, Patricia A.; Daw, Nathaniel D. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3760140/ https://www.ncbi.nlm.nih.gov/pubmed/24027514 http://dx.doi.org/10.3389/fnhum.2013.00525 |
_version_ | 1782282739265830912 |
---|---|
author | Skatova, Anya; Chan, Patricia A.; Daw, Nathaniel D. |
author_facet | Skatova, Anya; Chan, Patricia A.; Daw, Nathaniel D. |
author_sort | Skatova, Anya |
collection | PubMed |
description | Prominent computational models describe a neural mechanism for learning from reward prediction errors, and it has been suggested that variations in this mechanism are reflected in personality factors such as trait extraversion. However, although trait extraversion has been linked to improved reward learning, it is not yet known whether this relationship is selective for the particular computational strategy associated with error-driven learning, known as model-free reinforcement learning, vs. another strategy, model-based learning, which the brain is also known to employ. In the present study we test this relationship by examining whether humans' scores on an extraversion scale predict individual differences in the balance between model-based and model-free learning strategies in a sequentially structured decision task designed to distinguish between them. In previous studies with this task, participants have shown a combination of both types of learning, but with substantial individual variation in the balance between them. In the current study, extraversion predicted worse behavior across both sorts of learning. However, the hypothesis that extraverts would be selectively better at model-free reinforcement learning held up among a subset of the more engaged participants, and overall, higher task engagement was associated with a more selective pattern by which extraversion predicted better model-free learning. The findings indicate a relationship between a broad personality orientation and detailed computational learning mechanisms. Results like those in the present study suggest an intriguing and rich relationship between core neuro-computational mechanisms and broader life orientations and outcomes. |
format | Online Article Text |
id | pubmed-3760140 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-37601402013-09-11 Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task Skatova, Anya Chan, Patricia A. Daw, Nathaniel D. Front Hum Neurosci Neuroscience Prominent computational models describe a neural mechanism for learning from reward prediction errors, and it has been suggested that variations in this mechanism are reflected in personality factors such as trait extraversion. However, although trait extraversion has been linked to improved reward learning, it is not yet known whether this relationship is selective for the particular computational strategy associated with error-driven learning, known as model-free reinforcement learning, vs. another strategy, model-based learning, which the brain is also known to employ. In the present study we test this relationship by examining whether humans' scores on an extraversion scale predict individual differences in the balance between model-based and model-free learning strategies in a sequentially structured decision task designed to distinguish between them. In previous studies with this task, participants have shown a combination of both types of learning, but with substantial individual variation in the balance between them. In the current study, extraversion predicted worse behavior across both sorts of learning. However, the hypothesis that extraverts would be selectively better at model-free reinforcement learning held up among a subset of the more engaged participants, and overall, higher task engagement was associated with a more selective pattern by which extraversion predicted better model-free learning. The findings indicate a relationship between a broad personality orientation and detailed computational learning mechanisms. Results like those in the present study suggest an intriguing and rich relationship between core neuro-computational mechanisms and broader life orientations and outcomes. Frontiers Media S.A. 2013-09-03 /pmc/articles/PMC3760140/ /pubmed/24027514 http://dx.doi.org/10.3389/fnhum.2013.00525 Text en Copyright © 2013 Skatova, Chan and Daw. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Skatova, Anya Chan, Patricia A. Daw, Nathaniel D. Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title | Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title_full | Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title_fullStr | Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title_full_unstemmed | Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title_short | Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
title_sort | extraversion differentiates between model-based and model-free strategies in a reinforcement learning task |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3760140/ https://www.ncbi.nlm.nih.gov/pubmed/24027514 http://dx.doi.org/10.3389/fnhum.2013.00525 |
work_keys_str_mv | AT skatovaanya extraversiondifferentiatesbetweenmodelbasedandmodelfreestrategiesinareinforcementlearningtask AT chanpatriciaa extraversiondifferentiatesbetweenmodelbasedandmodelfreestrategiesinareinforcementlearningtask AT dawnathanield extraversiondifferentiatesbetweenmodelbasedandmodelfreestrategiesinareinforcementlearningtask |
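The abstract above contrasts model-free learning, which adjusts action values directly from reward prediction errors, with model-based learning, which plans over a learned model of the task structure. As a purely illustrative sketch of the error-driven idea (assuming a standard Q-learning-style update; this is not code from the article, and the function name and parameters are hypothetical):

```python
# Minimal sketch of a model-free, prediction-error-driven value update
# (generic Q-learning / TD form; not the authors' model or code).

def model_free_update(q_value: float, reward: float, learning_rate: float = 0.1) -> float:
    """Move a single action value toward the obtained reward."""
    prediction_error = reward - q_value          # delta = r - Q(s, a)
    return q_value + learning_rate * prediction_error

# Example: an action currently valued at 0.5 yields a reward of 1.0
q = 0.5
q = model_free_update(q, reward=1.0)
print(q)  # 0.55 -- the value shifts a step toward the reward
```

A model-based learner, by contrast, would combine learned transition probabilities with estimated outcome values to evaluate actions prospectively; the sequentially structured task described in the abstract is designed to distinguish that strategy from the error-driven update sketched here.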