Markov games with unknown random state-actions-dependent discount factors: Empirical estimation

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


The work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form α̃(x_n, a_n, b_n, ξ_{n+1}), where x_n, a_n, b_n, and ξ_{n+1} are the state, the actions of the players, and a random disturbance at time n, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process {ξ_n} is formed by observable, independent, and identically distributed random variables with common distribution θ, which is unknown to the players. By using the empirical distribution to estimate θ, we introduce a procedure to approximate the value V of the game; this procedure yields construction schemes for stationary optimal strategies and asymptotically optimal Markov strategies.
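
The estimation procedure described above can be illustrated with a deliberately simplified sketch. The snippet below uses a one-player (control) problem with finite state, action, and disturbance spaces rather than the paper's two-player zero-sum game on Borel spaces, and all model functions (payoff, discount, transition, the true distribution θ) are invented for illustration: the unknown distribution θ is replaced by the empirical distribution of observed disturbances, and the resulting discounted dynamic-programming operator is iterated to approximate the value.

```python
import random

# Toy sketch of the empirical-estimation idea: the disturbance distribution
# theta is unknown, so we substitute the empirical distribution of observed
# samples xi_1, ..., xi_n and iterate the resulting discounted operator.
# One maximizing player only; all model functions are illustrative.

random.seed(0)

STATES = [0, 1]
ACTIONS = [0, 1]
XI = [0, 1]                       # finite support of the disturbance
THETA = {0: 0.3, 1: 0.7}          # true distribution, unknown to the player

def payoff(x, a, xi):             # one-stage payoff r(x, a, xi)
    return 1.0 + 0.5 * x - 0.2 * a + 0.1 * xi

def discount(x, a, xi):           # state-action-dependent discount, <= 0.75 < 1
    return 0.5 + 0.1 * x + 0.05 * a + 0.1 * xi

def transition(x, a, xi):         # next state x_{n+1}
    return (x + a + xi) % 2

def bellman(V, dist):
    """One step of the discounted operator under a distribution on XI."""
    return {
        x: max(
            sum(dist[xi] * (payoff(x, a, xi)
                            + discount(x, a, xi) * V[transition(x, a, xi)])
                for xi in XI)
            for a in ACTIONS)
        for x in STATES
    }

def solve(dist, iters=200):
    """Fixed-point iteration; converges since the discount is bounded by 0.75."""
    V = {x: 0.0 for x in STATES}
    for _ in range(iters):
        V = bellman(V, dist)
    return V

# Empirical distribution built from n observed i.i.d. disturbances.
samples = [0 if random.random() < THETA[0] else 1 for _ in range(5000)]
empirical = {xi: samples.count(xi) / len(samples) for xi in XI}

V_true = solve(THETA)
V_emp = solve(empirical)
gap = max(abs(V_emp[x] - V_true[x]) for x in STATES)
print("value gap:", round(gap, 4))
```

As the sample size grows, the empirical distribution converges to θ and the approximate value converges to the true value, which is the asymptotic-optimality phenomenon the paper establishes in its far more general setting.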

Original language: English
Pages (from-to): 166-177
Number of pages: 12
Journal: Asian Journal of Control
Status: Published - 1 Jan 2019

Bibliographical note

Publisher Copyright:
© 2019 Chinese Automatic Control Society and John Wiley & Sons Australia, Ltd


