
Zero-sum discrete-time Markov games with unknown disturbance distribution: discounted and average criteria


Bibliographic Details
Main author: Minjárez-Sosa, J Adolfo
Language: English
Published: Springer 2020
Subjects:
Online access: https://dx.doi.org/10.1007/978-3-030-35720-7
http://cds.cern.ch/record/2708773
Description
Summary: This SpringerBrief deals with a class of discrete-time zero-sum Markov games with Borel state and action spaces, and possibly unbounded payoffs, under discounted and average criteria, whose state process evolves according to a stochastic difference equation. The corresponding disturbance process is an observable sequence of independent and identically distributed random variables whose distribution is unknown to both players. Unlike the standard case, the game is played over an infinite horizon and evolves as follows. At each stage, once the players have observed the state of the game, and before choosing their actions, players 1 and 2 implement a statistical estimation procedure to obtain estimates of the unknown distribution. Then, independently, the players adapt their decisions to these estimators in order to select their actions and construct their strategies. This book presents a systematic analysis of recent developments in this kind of game. Specifically, the theoretical foundations of the procedures combining statistical estimation and control techniques for the construction of the players' strategies are introduced, with illustrative examples. In this sense, the book is an essential reference for theoretical and applied researchers in the fields of stochastic control and game theory, and their applications.
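
The summary describes a loop that combines statistical estimation with strategy construction: at each stage the players observe the state and the past i.i.d. disturbances, estimate the unknown disturbance distribution, and then choose actions adapted to that estimate. The Python sketch below illustrates this idea under purely hypothetical choices of dynamics F, stage payoff r, action grid, disturbance law, and discount factor; it is not the book's model, only a minimal plug-in illustration in which the empirical distribution of observed disturbances stands in for the unknown law.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, a, b, xi):
    """Hypothetical stochastic difference equation x_{t+1} = F(x_t, a_t, b_t, xi_t)."""
    return 0.5 * x + a - b + xi

def r(x, a, b):
    """Hypothetical stage payoff; player 1 maximizes, player 2 minimizes."""
    return x * (a - b)

actions = np.linspace(-1.0, 1.0, 5)  # finite action grid for both players (an assumption)
alpha = 0.9                          # discount factor for the discounted criterion
x, observed, total = 1.0, [], 0.0

for t in range(50):
    # Estimation step: both players use the empirical distribution of the
    # disturbances observed in earlier stages in place of the unknown law.
    sample = np.array(observed) if observed else np.array([0.0])  # crude guess at t = 0
    # Adaptive stage game: expected one-step payoffs under the empirical distribution.
    payoff = np.array([[np.mean([r(F(x, a, b, s), a, b) for s in sample])
                        for b in actions] for a in actions])
    a_star = actions[payoff.min(axis=1).argmax()]  # player 1: maximin action
    b_star = actions[payoff.max(axis=0).argmin()]  # player 2: minimax action
    total += alpha ** t * r(x, a_star, b_star)     # running discounted payoff
    xi = rng.exponential(1.0)  # true disturbance, drawn from the law unknown to the players
    observed.append(xi)        # ...and observed by both players for future estimates
    x = F(x, a_star, b_star, xi)

print(f"discounted payoff after 50 stages: {total:.3f}")
```

The key design point mirrored from the summary is the ordering of events within a stage: estimation happens after observing the state but before the actions are chosen, and the state transition uses the true disturbance, which only then becomes part of the players' observations.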