Model-based prioritization for acquiring protection
Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of decisions involving protection remains unclear, as well as whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we...
Main Authors: Tashjian, Sarah M.; Wise, Toby; Mobbs, Dean
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9810162/ https://www.ncbi.nlm.nih.gov/pubmed/36534704 http://dx.doi.org/10.1371/journal.pcbi.1010805
_version_ | 1784863252402929664
author | Tashjian, Sarah M.; Wise, Toby; Mobbs, Dean
author_facet | Tashjian, Sarah M.; Wise, Toby; Mobbs, Dean
author_sort | Tashjian, Sarah M. |
collection | PubMed |
description | Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of decisions involving protection remains unclear, as well as whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we compare protection acquisition to reward acquisition and punishment avoidance to examine overlapping and distinct features across the three action types. Protection acquisition is positively valenced similar to reward. For both protection and reward, the more the actor gains, the more benefit. However, reward and protection occur in different contexts, with protection existing in aversive contexts. Punishment avoidance also occurs in aversive contexts, but differs from protection because punishment is negatively valenced and motivates avoidance. Across three independent studies (Total N = 600) we applied computational modeling to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting model-based control depends on the context in which outcomes are encountered as well as the valence of the outcome. |
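Note on the abstract's key measure: the "degree of model-based control" is conventionally estimated by fitting a hybrid reinforcement-learning model in which a weighting parameter arbitrates between model-based values (planning over a learned transition model) and model-free cached values, as in two-stage decision tasks. The sketch below illustrates that idea only; the task structure, parameter names, and values (`TRANS`, `ALPHA`, `W`, `BETA`) are assumptions for illustration, not the authors' actual model specification.

```python
import numpy as np

# Illustrative hybrid model-based/model-free agent for a generic two-stage
# task. All names and values here are assumptions for the sketch, not
# parameters taken from the paper.

N_ACTIONS = 2                    # first-stage choices
TRANS = np.array([[0.7, 0.3],    # assumed P(second-stage state | action):
                  [0.3, 0.7]])   # common vs. rare transitions
ALPHA = 0.3                      # learning rate (reported not to differ)
W = 0.7                          # model-based weight (reported higher for protection)
BETA = 3.0                       # softmax inverse temperature

q_mf = np.zeros(N_ACTIONS)       # cached (model-free) first-stage values
q_stage2 = np.zeros(2)           # learned second-stage state values

def hybrid_values():
    """Blend planned (model-based) and cached (model-free) action values."""
    q_mb = TRANS @ q_stage2      # plan through the transition model
    return W * q_mb + (1 - W) * q_mf

def choose():
    """Softmax choice over the blended values."""
    q = hybrid_values()
    p = np.exp(BETA * q) / np.sum(np.exp(BETA * q))
    return np.random.choice(N_ACTIONS, p=p)

def update(action, state, outcome):
    """Temporal-difference updates after observing one trial."""
    q_stage2[state] += ALPHA * (outcome - q_stage2[state])
    q_mf[action] += ALPHA * (q_stage2[state] - q_mf[action])
```

In model fits of this family, a higher estimated weight indicates greater reliance on the transition model when choosing, which is the sense in which protection acquisition "evoked a higher degree of model-based control" than reward acquisition or punishment avoidance; the learning rate governs how quickly values update and, per the abstract, did not differ across action types.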
format | Online Article Text |
id | pubmed-9810162 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9810162 2023-01-04 Model-based prioritization for acquiring protection Tashjian, Sarah M.; Wise, Toby; Mobbs, Dean PLoS Comput Biol Research Article Protection often involves the capacity to prospectively plan the actions needed to mitigate harm. The computational architecture of decisions involving protection remains unclear, as well as whether these decisions differ from other beneficial prospective actions such as reward acquisition. Here we compare protection acquisition to reward acquisition and punishment avoidance to examine overlapping and distinct features across the three action types. Protection acquisition is positively valenced similar to reward. For both protection and reward, the more the actor gains, the more benefit. However, reward and protection occur in different contexts, with protection existing in aversive contexts. Punishment avoidance also occurs in aversive contexts, but differs from protection because punishment is negatively valenced and motivates avoidance. Across three independent studies (Total N = 600) we applied computational modeling to examine model-based reinforcement learning for protection, reward, and punishment in humans. Decisions motivated by acquiring protection evoked a higher degree of model-based control than acquiring reward or avoiding punishment, with no significant differences in learning rate. The context-valence asymmetry characteristic of protection increased deployment of flexible decision strategies, suggesting model-based control depends on the context in which outcomes are encountered as well as the valence of the outcome. Public Library of Science 2022-12-19 /pmc/articles/PMC9810162/ /pubmed/36534704 http://dx.doi.org/10.1371/journal.pcbi.1010805 Text en © 2022 Tashjian et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article; Tashjian, Sarah M.; Wise, Toby; Mobbs, Dean; Model-based prioritization for acquiring protection
title | Model-based prioritization for acquiring protection |
title_full | Model-based prioritization for acquiring protection |
title_fullStr | Model-based prioritization for acquiring protection |
title_full_unstemmed | Model-based prioritization for acquiring protection |
title_short | Model-based prioritization for acquiring protection |
title_sort | model-based prioritization for acquiring protection |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9810162/ https://www.ncbi.nlm.nih.gov/pubmed/36534704 http://dx.doi.org/10.1371/journal.pcbi.1010805 |
work_keys_str_mv | AT tashjiansarahm modelbasedprioritizationforacquiringprotection AT wisetoby modelbasedprioritizationforacquiringprotection AT mobbsdean modelbasedprioritizationforacquiringprotection |