Abstract
This work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form $\alpha(x_n, a_n, b_n, \xi_{n+1})$, where $x_n$, $a_n$, $b_n$, and $\xi_{n+1}$ are the state, the actions of the players, and a random disturbance at time $n$, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process $\{\xi_n\}$ is formed by observable, independent, and identically distributed random variables with common distribution $\theta$, which is unknown to the players. By using the empirical distribution to estimate $\theta$, we introduce a procedure to approximate the value $V^*$ of the game; this procedure yields construction schemes for stationary optimal strategies and asymptotically optimal Markov strategies.
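The estimation-and-approximation idea admits a compact illustration in a finite toy model. The sketch below is an assumption-laden reduction of the Borel-space setting of the paper: it assumes finitely many states and actions, transitions of the form $x_{n+1} = F(x_n, a_n, b_n, \xi_{n+1})$, and replaces the unknown $\theta$ by the empirical distribution of the observed disturbances inside a Shapley-type value-iteration step. All names here (`matrix_game_value`, `empirical_shapley_step`, `r`, `alpha`, `F`, `xi_samples`) are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game with payoff matrix M
    (row player maximizes), via the standard linear program."""
    m, k = M.shape
    # Decision variables: mixed strategy p (m entries) and the value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                               # minimize -v, i.e. maximize v
    A_ub = np.hstack([-M.T, np.ones((k, 1))])  # v <= (p^T M)_j for each column j
    b_ub = np.zeros(k)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                          # sum_i p_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def empirical_shapley_step(V, r, alpha, F, xi_samples, n_states, A, B):
    """One Shapley value-iteration step in which the expectation over the
    unknown disturbance law theta is replaced by an average over the
    observed i.i.d. samples xi_samples (the empirical distribution)."""
    V_new = np.empty(n_states)
    for x in range(n_states):
        M = np.empty((len(A), len(B)))
        for i, a in enumerate(A):
            for j, b in enumerate(B):
                # Random state-action-dependent discount factor
                # alpha(x, a, b, xi) applied to the value at the next state.
                cont = np.mean([alpha(x, a, b, xi) * V[F(x, a, b, xi)]
                                for xi in xi_samples])
                M[i, j] = r(x, a, b) + cont
        V_new[x] = matrix_game_value(M)
    return V_new
```

Iterating `empirical_shapley_step` with a growing sample mirrors, in spirit, the approximation of $V^*$ described above; the actual paper works with Borel state and action spaces, possibly unbounded payoffs, and measurable selection arguments, none of which this finite sketch attempts to capture.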
| Original language | English |
|---|---|
| Pages (from-to) | 166-177 |
| Number of pages | 12 |
| Journal | Asian Journal of Control |
| Volume | 23 |
| Issue number | 1 |
| DOIs | |
| State | Published - 1 Jan 2019 |
Bibliographical note
Publisher Copyright: © 2019 Chinese Automatic Control Society and John Wiley & Sons Australia, Ltd
Keywords
- discounted optimality
- empirical estimation
- Markov games
- non-constant discount factors