Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs
The Nash equilibrium concept has previously been shown to be an important tool to understand human sensorimotor interactions, where different actors vie for minimizing their respective effort while engaging in a multi-agent motor task. However, it is not clear how such equilibria are reached. Here,...
Main authors: Lindig-León, Cecilia; Schmid, Gerrit; Braun, Daniel A.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects: Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8531365/ https://www.ncbi.nlm.nih.gov/pubmed/34675336 http://dx.doi.org/10.1038/s41598-021-99428-0
Field | Value |
---|---|
_version_ | 1784586840343314432 |
author | Lindig-León, Cecilia; Schmid, Gerrit; Braun, Daniel A. |
author_facet | Lindig-León, Cecilia; Schmid, Gerrit; Braun, Daniel A. |
author_sort | Lindig-León, Cecilia |
collection | PubMed |
description | The Nash equilibrium concept has previously been shown to be an important tool to understand human sensorimotor interactions, where different actors vie for minimizing their respective effort while engaging in a multi-agent motor task. However, it is not clear how such equilibria are reached. Here, we compare different reinforcement learning models to human behavior engaged in sensorimotor interactions with haptic feedback based on three classic games, including the prisoner’s dilemma, and the symmetric and asymmetric matching pennies games. We find that a discrete analysis that reduces the continuous sensorimotor interaction to binary choices as in classical matrix games does not allow us to distinguish between the different learning algorithms, but that a more detailed continuous analysis with continuous formulations of the learning algorithms and the game-theoretic solutions affords different predictions. In particular, we find that Q-learning with intrinsic costs that disfavor deviations from average behavior explains the observed data best, even though all learning algorithms equally converge to admissible Nash equilibrium solutions. We therefore conclude that it is important to study different learning algorithms for understanding sensorimotor interactions, as such behavior cannot be inferred from a game-theoretic analysis alone that focuses simply on the Nash equilibrium concept, because different learning algorithms impose preferences on the set of possible equilibrium solutions through their inherent learning dynamics. |
format | Online Article Text |
id | pubmed-8531365 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8531365 2021-10-25 Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs Lindig-León, Cecilia Schmid, Gerrit Braun, Daniel A. Sci Rep Article The Nash equilibrium concept has previously been shown to be an important tool to understand human sensorimotor interactions, where different actors vie for minimizing their respective effort while engaging in a multi-agent motor task. However, it is not clear how such equilibria are reached. Here, we compare different reinforcement learning models to human behavior engaged in sensorimotor interactions with haptic feedback based on three classic games, including the prisoner’s dilemma, and the symmetric and asymmetric matching pennies games. We find that a discrete analysis that reduces the continuous sensorimotor interaction to binary choices as in classical matrix games does not allow us to distinguish between the different learning algorithms, but that a more detailed continuous analysis with continuous formulations of the learning algorithms and the game-theoretic solutions affords different predictions. In particular, we find that Q-learning with intrinsic costs that disfavor deviations from average behavior explains the observed data best, even though all learning algorithms equally converge to admissible Nash equilibrium solutions. We therefore conclude that it is important to study different learning algorithms for understanding sensorimotor interactions, as such behavior cannot be inferred from a game-theoretic analysis alone that focuses simply on the Nash equilibrium concept, because different learning algorithms impose preferences on the set of possible equilibrium solutions through their inherent learning dynamics. |
Nature Publishing Group UK 2021-10-21 /pmc/articles/PMC8531365/ /pubmed/34675336 http://dx.doi.org/10.1038/s41598-021-99428-0 Text en © The Author(s) 2021. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/ . |
spellingShingle | Article Lindig-León, Cecilia Schmid, Gerrit Braun, Daniel A. Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title | Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title_full | Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title_fullStr | Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title_full_unstemmed | Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title_short | Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs |
title_sort | nash equilibria in human sensorimotor interactions explained by q-learning with intrinsic costs |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8531365/ https://www.ncbi.nlm.nih.gov/pubmed/34675336 http://dx.doi.org/10.1038/s41598-021-99428-0 |
work_keys_str_mv | AT lindigleoncecilia nashequilibriainhumansensorimotorinteractionsexplainedbyqlearningwithintrinsiccosts AT schmidgerrit nashequilibriainhumansensorimotorinteractionsexplainedbyqlearningwithintrinsiccosts AT braundaniela nashequilibriainhumansensorimotorinteractionsexplainedbyqlearningwithintrinsiccosts |
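The abstract's best-fitting model combines Q-learning with an intrinsic cost that disfavors deviations from average behavior. The following Python sketch illustrates the general idea for the symmetric matching pennies game only; the function names, the penalty weight `beta`, and the particular cost form are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Illustrative sketch, not the paper's exact model: two independent
# stateless Q-learners play repeated symmetric matching pennies.
# Player 0 is rewarded when the actions match, player 1 when they differ.
# Each learner pays a hypothetical intrinsic cost (weight `beta`) for
# choosing an action that is rare in its own running average behavior.

def softmax_action(q, tau=0.2):
    """Sample action 0 or 1 from a softmax over the two Q-values."""
    m = max(q)
    w = [math.exp((v - m) / tau) for v in q]
    return 0 if random.random() * (w[0] + w[1]) < w[0] else 1

def play(episodes=20000, alpha=0.1, beta=0.2, freq_step=0.01, seed=0):
    random.seed(seed)
    Q = [[0.0, 0.0], [0.0, 0.0]]     # Q[player][action]
    freq = [[0.5, 0.5], [0.5, 0.5]]  # running action frequencies per player
    for _ in range(episodes):
        a = [softmax_action(Q[0]), softmax_action(Q[1])]
        match = a[0] == a[1]
        reward = [1.0 if match else -1.0, -1.0 if match else 1.0]
        for p in (0, 1):
            # intrinsic cost grows as the chosen action becomes rarer
            # in the player's own recent behavior
            cost = beta * (1.0 - freq[p][a[p]])
            Q[p][a[p]] += alpha * (reward[p] - cost - Q[p][a[p]])
            for act in (0, 1):
                taken = 1.0 if act == a[p] else 0.0
                freq[p][act] += freq_step * (taken - freq[p][act])
    return Q, freq

Q, freq = play()
print("action frequencies:", [[round(f, 2) for f in row] for row in freq])
```

A stateless (bandit-style) Q-learner is used because a repeated matrix game has a single state; at the mixed Nash equilibrium of matching pennies both actions are played with roughly equal frequency, which the running-frequency estimates tend to orbit.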