Deeply Felt Affect: The Emergence of Valence in Deep Active Inference
The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting...
Main Authors: | Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J. D. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MIT Press, 2021 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8594962/ https://www.ncbi.nlm.nih.gov/pubmed/33253028 http://dx.doi.org/10.1162/neco_a_01341 |
_version_ | 1784600091853586432 |
---|---|
author | Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J. D. |
author_facet | Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J. D. |
author_sort | Hesp, Casper |
collection | PubMed |
description | The positive-negative axis of emotional valence has long been recognized as fundamental to adaptive behavior, but its origin and underlying function have largely eluded formal theorizing and computational modeling. Using deep active inference, a hierarchical inference scheme that rests on inverting a model of how sensory data are generated, we develop a principled Bayesian model of emotional valence. This formulation asserts that agents infer their valence state based on the expected precision of their action model—an internal estimate of overall model fitness (“subjective fitness”). This index of subjective fitness can be estimated within any environment and exploits the domain generality of second-order beliefs (beliefs about beliefs). We show how maintaining internal valence representations allows the ensuing affective agent to optimize confidence in action selection preemptively. Valence representations can in turn be optimized by leveraging the (Bayes-optimal) updating term for subjective fitness, which we label affective charge (AC). AC tracks changes in fitness estimates and lends a sign to otherwise unsigned divergences between predictions and outcomes. We simulate the resulting affective inference by subjecting an in silico affective agent to a T-maze paradigm requiring context learning, followed by context reversal. This formulation of affective inference offers a principled account of the link between affect, (mental) action, and implicit metacognition. It characterizes how a deep biological system can infer its affective state and reduce uncertainty about such inferences through internal action (i.e., top-down modulation of priors that underwrite confidence). Thus, we demonstrate the potential of active inference to provide a formal and computationally tractable account of affect. Our demonstration of the face validity and potential utility of this formulation represents the first step within a larger research program. Next, this model can be leveraged to test the hypothesized role of valence by fitting the model to behavioral and neuronal responses. |
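The abstract's central move — affective charge (AC) as a signed update to the precision of the agent's action model — can be illustrated with a deliberately simplified sketch. This is a toy analogy, not the paper's full hierarchical model: the function names, the single-step update, and all numeric values are illustrative assumptions. The idea shown is only that evidence shifting policy beliefs toward low-expected-free-energy policies raises confidence (positive AC, positive valence), while contradicting evidence lowers it (negative AC).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def affective_charge(G, F, beta0=1.0):
    """One illustrative precision-update step (toy sketch, not the paper's model).

    G     : expected free energy per policy (lower = better a priori)
    F     : free energy per policy after observing an outcome
    beta0 : prior inverse precision of the action model (hypothetical value)
    """
    gamma = 1.0 / beta0
    prior = softmax(-gamma * G)        # policy beliefs before the outcome
    post = softmax(-gamma * G - F)     # policy beliefs after the outcome
    # Signed "affective charge": positive when belief mass moves toward
    # policies with low expected free energy, negative otherwise.
    ac = (post - prior) @ (-G)
    beta1 = max(beta0 - ac, 1e-6)      # positive AC -> lower beta -> higher confidence
    return ac, beta1

# Policy 0 looks better a priori (lower G). Confirming evidence yields
# positive AC; contradicting evidence yields negative AC.
G = np.array([2.0, 4.0])
ac_good, _ = affective_charge(G, np.array([0.0, 3.0]))  # outcome favors policy 0
ac_bad, _ = affective_charge(G, np.array([3.0, 0.0]))   # outcome favors policy 1
```

The sign of AC here does the work the abstract describes: it converts an otherwise unsigned shift in beliefs into a positive or negative quantity that can drive a valence estimate.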
format | Online Article Text |
id | pubmed-8594962 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MIT Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-8594962 2021-11-17 Deeply Felt Affect: The Emergence of Valence in Deep Active Inference Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J. D. Neural Comput Research Article [Abstract as given in the description field above.] MIT Press 2021-02-01 /pmc/articles/PMC8594962/ /pubmed/33253028 http://dx.doi.org/10.1162/neco_a_01341 Text en © 2020 Massachusetts Institute of Technology. This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, which permits copying and redistributing the material in any medium or format for noncommercial purposes only. For a full description of the license, please visit https://creativecommons.org/licenses/by-nc/4.0/legalcode. |
spellingShingle | Research Article Hesp, Casper; Smith, Ryan; Parr, Thomas; Allen, Micah; Friston, Karl J.; Ramstead, Maxwell J. D. Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title | Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title_full | Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title_fullStr | Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title_full_unstemmed | Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title_short | Deeply Felt Affect: The Emergence of Valence in Deep Active Inference |
title_sort | deeply felt affect: the emergence of valence in deep active inference |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8594962/ https://www.ncbi.nlm.nih.gov/pubmed/33253028 http://dx.doi.org/10.1162/neco_a_01341 |
work_keys_str_mv | AT hespcasper deeplyfeltaffecttheemergenceofvalenceindeepactiveinference AT smithryan deeplyfeltaffecttheemergenceofvalenceindeepactiveinference AT parrthomas deeplyfeltaffecttheemergenceofvalenceindeepactiveinference AT allenmicah deeplyfeltaffecttheemergenceofvalenceindeepactiveinference AT fristonkarlj deeplyfeltaffecttheemergenceofvalenceindeepactiveinference AT ramsteadmaxwelljd deeplyfeltaffecttheemergenceofvalenceindeepactiveinference |