Transfer of conflict and cooperation from experienced games to new games: a connectionist model of learning
| Main author: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2015 |
| Subjects: | |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4379898/ · https://www.ncbi.nlm.nih.gov/pubmed/25873855 · http://dx.doi.org/10.3389/fnins.2015.00102 |
| Summary: | The question of whether, and if so how, learning can be transferred from previously experienced games to novel games has recently attracted the attention of the experimental game theory literature. Existing research presumes that learning operates over actions, beliefs, or decision rules. This study instead uses a connectionist approach that learns a direct mapping from game payoffs to a probability distribution over own actions. Learning is operationalized as a backpropagation rule that adjusts the weights of feedforward neural networks in the direction of increasing the probability of an agent playing a myopic best response to the last game played (see the sketch following this record). One advantage of this approach is that it expands the scope of the model to any possible n × n normal-form game, allowing for a comprehensive model of transfer of learning. Agents are exposed to games drawn from one of seven classes of games with significantly different strategic characteristics and then forced to play games from previously unseen classes. I find significant transfer of learning, i.e., behavior that is path-dependent, or conditional on the previously seen games. Cooperation is more pronounced in new games when agents are previously exposed to games where the incentive to cooperate is stronger than the incentive to compete, i.e., when individual incentives are aligned. Prior exposure to Prisoner's dilemma, zero-sum, and discoordination games led to a significant decrease in realized payoffs for all the game classes under investigation. A distinction is made between superficial and deep transfer of learning: the former is driven by superficial payoff similarities between games, the latter by differences in the incentive structures or strategic implications of the games. I examine whether agents learn to play the Nash equilibria of games, how they select amongst multiple equilibria, and whether they transfer Nash equilibrium behavior to unseen games. Sufficient exposure to a strategically heterogeneous set of games is found to be a necessary condition for deep learning (and transfer) across game classes. Paradoxically, superficial transfer of learning is shown to lead to better outcomes than deep transfer for a wide range of game classes. The simulation results corroborate important experimental findings with human subjects, and make several novel predictions that can be tested experimentally. |
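The summary describes the learning mechanism in enough detail to sketch it: a feedforward network maps a game's payoffs to a probability distribution over the agent's own actions, and backpropagation nudges the weights toward the myopic best response to the last game played. The Python sketch below is a minimal illustration of that idea, not the paper's implementation; the network size, the 3 × 3 game size, the randomly generated training games, and the assumption that the myopic best response is computed against a uniformly mixing opponent are all choices made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 3                      # n x n games (size assumed for illustration)
N_IN = 2 * N_ACTIONS * N_ACTIONS   # own and opponent payoff matrices, flattened
N_HID = 32                         # hidden-layer width (arbitrary choice)

# One hidden layer with tanh, softmax output over the agent's own actions.
W1 = rng.normal(0, 0.1, (N_HID, N_IN));    b1 = np.zeros(N_HID)
W2 = rng.normal(0, 0.1, (N_ACTIONS, N_HID)); b2 = np.zeros(N_ACTIONS)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x):
    """Map flattened payoffs to a probability distribution over own actions."""
    h = np.tanh(W1 @ x + b1)
    return h, softmax(W2 @ h + b2)

def myopic_best_response(own_payoffs):
    """Best reply assuming the opponent mixes uniformly (illustrative assumption)."""
    return int(np.argmax(own_payoffs.mean(axis=1)))

def train_step(own_payoffs, opp_payoffs, lr=0.05):
    """One backpropagation update that raises the probability of the
    best response to the game just played (cross-entropy, one-hot target)."""
    global W1, b1, W2, b2
    x = np.concatenate([own_payoffs.ravel(), opp_payoffs.ravel()])
    h, p = forward(x)
    target = np.zeros(N_ACTIONS)
    target[myopic_best_response(own_payoffs)] = 1.0
    dz2 = p - target                 # gradient of cross-entropy w.r.t. softmax input
    dW2 = np.outer(dz2, h)
    dz1 = (W2.T @ dz2) * (1 - h ** 2)  # backprop through tanh
    dW1 = np.outer(dz1, x)
    W2 -= lr * dW2; b2 -= lr * dz2
    W1 -= lr * dW1; b1 -= lr * dz1

# Expose the agent to many randomly drawn games...
for _ in range(5000):
    train_step(rng.uniform(0, 1, (N_ACTIONS, N_ACTIONS)),
               rng.uniform(0, 1, (N_ACTIONS, N_ACTIONS)))

# ...then query its action distribution on a game it has never seen.
new_own = rng.uniform(0, 1, (N_ACTIONS, N_ACTIONS))
new_opp = rng.uniform(0, 1, (N_ACTIONS, N_ACTIONS))
_, probs = forward(np.concatenate([new_own.ravel(), new_opp.ravel()]))
print("action probabilities on an unseen game:", np.round(probs, 3))
```

The point the summary emphasizes is visible in this sketch: the same weights are reused for every game, so training on one class of games shapes the action probabilities assigned to games from classes the agent has never encountered, which is the transfer of learning the paper investigates.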