Markov games with unknown random state-actions-dependent discount factors

Empirical estimation

Research output: Contribution to journal › Article › peer-review

Abstract

The work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form α(x_n, a_n, b_n, ξ_{n+1}), where x_n, a_n, b_n, and ξ_{n+1} are the state, the actions of the players, and a random disturbance at time n, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process {ξ_n} is formed by observable, independent, and identically distributed random variables with common distribution θ, which is unknown to the players. By using the empirical distribution to estimate θ, we introduce a procedure to approximate the value V* of the game; such a procedure yields construction schemes of stationary optimal strategies and asymptotically optimal Markov strategies.
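
The abstract outlines an estimation-and-approximation procedure: replace the unknown disturbance distribution θ by the empirical distribution of the observed disturbances, then approximate the game value by iterating a minimax (Shapley-style) operator. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' construction: it assumes a finite state space, finite action sets, and a transition that is deterministic given the disturbance, whereas the paper works in Borel spaces with possibly unbounded payoffs. The names matrix_game_value and approximate_value, and the whole model interface, are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    """Value of the zero-sum matrix game with payoff matrix M
    (row player maximizes, column player minimizes), via the
    classical linear-programming reduction."""
    m, n = M.shape
    shift = M.min()
    Ms = M - shift + 1.0  # make all entries >= 1 so the shifted value is positive
    # minimize 1'y  s.t.  Ms' y >= 1,  y >= 0;  shifted value = 1 / (1'y)
    res = linprog(c=np.ones(m), A_ub=-Ms.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.fun + shift - 1.0

def approximate_value(r, alpha, next_state, xi_samples, n_states, iters=200):
    """Iterate the empirical Shapley operator to approximate the game value.

    r[x, a, b]              : one-stage payoff
    alpha(x, a, b, xi)      : discount factor in [0, 1)
    next_state(x, a, b, xi) : next state (deterministic given the disturbance)
    xi_samples              : observed i.i.d. disturbances, standing in for theta
    """
    V = np.zeros(n_states)
    _, nA, nB = r.shape
    for _ in range(iters):
        V_new = np.empty(n_states)
        for x in range(n_states):
            M = np.empty((nA, nB))
            for a in range(nA):
                for b in range(nB):
                    # expectation taken under the empirical distribution of xi
                    M[a, b] = np.mean([r[x, a, b]
                                       + alpha(x, a, b, xi) * V[next_state(x, a, b, xi)]
                                       for xi in xi_samples])
            V_new[x] = matrix_game_value(M)  # stage game solved by LP
        V = V_new
    return V
```

A toy run, with Bernoulli disturbances and a discount factor that depends on the disturbance (again, purely illustrative):

```python
rng = np.random.default_rng(0)
r = rng.uniform(0, 1, size=(2, 2, 2))            # two states, two actions per player
alpha = lambda x, a, b, xi: 0.8 + 0.1 * xi       # random state-action-dependent discount
next_state = lambda x, a, b, xi: (x + a + xi) % 2
xi_samples = rng.integers(0, 2, size=500)        # observed i.i.d. disturbances
print(approximate_value(r, alpha, next_state, xi_samples, n_states=2))
```

Since the discount factor is bounded away from 1, the empirical operator is a contraction, so the iteration converges; the paper's contribution is establishing this kind of approximation, together with optimal and asymptotically optimal strategies, in the general Borel-space setting.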

Original language: English
Journal: Asian Journal of Control
DOI: 10.1002/asjc.2159
State: Published - 1 Jan 2019

Keywords

  • discounted optimality
  • empirical estimation
  • Markov games
  • non-constant discount factors

Cite this

@article{88548c86c436469694734ee16902840c,
title = "Markov games with unknown random state-actions-dependent discount factors: Empirical estimation",
abstract = "The work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form α(x_n, a_n, b_n, ξ_{n+1}), where x_n, a_n, b_n, and ξ_{n+1} are the state, the actions of the players, and a random disturbance at time n, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process {ξ_n} is formed by observable, independent, and identically distributed random variables with common distribution θ, which is unknown to the players. By using the empirical distribution to estimate θ, we introduce a procedure to approximate the value V* of the game; such a procedure yields construction schemes of stationary optimal strategies and asymptotically optimal Markov strategies.",
keywords = "discounted optimality, empirical estimation, Markov games, non-constant discount factors",
author = "Gonz{\'a}lez-S{\'a}nchez, David and Luque-V{\'a}squez, Fernando and Minj{\'a}rez-Sosa, {Jesus Adolfo}",
year = "2019",
month = "1",
day = "1",
doi = "10.1002/asjc.2159",
language = "English",
journal = "Asian Journal of Control",
issn = "1561-8625",

}

TY - JOUR

T1 - Markov games with unknown random state-actions-dependent discount factors

T2 - Empirical estimation

AU - González-Sánchez, David

AU - Luque-Vásquez, Fernando

AU - Minjárez-Sosa, Jesus Adolfo

PY - 2019/1/1

Y1 - 2019/1/1

N2 - The work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form α(x_n, a_n, b_n, ξ_{n+1}), where x_n, a_n, b_n, and ξ_{n+1} are the state, the actions of the players, and a random disturbance at time n, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process {ξ_n} is formed by observable, independent, and identically distributed random variables with common distribution θ, which is unknown to the players. By using the empirical distribution to estimate θ, we introduce a procedure to approximate the value V* of the game; such a procedure yields construction schemes of stationary optimal strategies and asymptotically optimal Markov strategies.

AB - The work deals with a class of discrete-time zero-sum Markov games under a discounted optimality criterion with random state-action-dependent discount factors of the form α(x_n, a_n, b_n, ξ_{n+1}), where x_n, a_n, b_n, and ξ_{n+1} are the state, the actions of the players, and a random disturbance at time n, respectively, taking values in Borel spaces. The one-stage payoff is assumed to be possibly unbounded. In addition, the process {ξ_n} is formed by observable, independent, and identically distributed random variables with common distribution θ, which is unknown to the players. By using the empirical distribution to estimate θ, we introduce a procedure to approximate the value V* of the game; such a procedure yields construction schemes of stationary optimal strategies and asymptotically optimal Markov strategies.

KW - discounted optimality

KW - empirical estimation

KW - Markov games

KW - non-constant discount factors

UR - http://www.scopus.com/inward/record.url?scp=85070753459&partnerID=8YFLogxK

U2 - 10.1002/asjc.2159

DO - 10.1002/asjc.2159

M3 - Article

JO - Asian Journal of Control

JF - Asian Journal of Control

SN - 1561-8625

ER -