Adaptive control for discrete-time Markov processes with unbounded costs: Average criterion

Evgueni I. Gordienko*, J. Adolfo Minjárez-Sosa

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Scopus citations

Abstract

The paper deals with a class of discrete-time Markov control processes with Borel state and action spaces and possibly unbounded one-stage costs. The processes are given by the recurrence equations x_{t+1} = F(x_t, a_t, ξ_t), t = 1, 2, ..., with i.i.d. ℝ^k-valued random vectors ξ_t whose density p is unknown. Assuming observability of ξ_t, and taking advantage of the procedure of statistical estimation of p used in a previous work by the authors, we construct an average cost optimal adaptive policy.
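For orientation, and in notation that is standard for this literature rather than quoted from the paper (the one-stage cost c, a control policy π, and the expectation E_x^π under π from initial state x are assumed symbols), the model and the average cost criterion described above can be sketched in LaTeX as follows.

  % Standard-notation sketch; c, \pi, and \mathbb{E}_x^{\pi} are conventional
  % symbols, not taken verbatim from the paper.
  \[
    x_{t+1} = F(x_t, a_t, \xi_t), \qquad t = 1, 2, \dots,
    \qquad \xi_t \ \text{i.i.d. with unknown density } p,
  \]
  \[
    J(\pi, x) \;=\; \limsup_{n \to \infty} \frac{1}{n}\,
    \mathbb{E}_x^{\pi}\!\left[ \sum_{t=1}^{n} c(x_t, a_t) \right].
  \]

An adaptive policy, built from on-line estimates of the unknown density p, is average cost optimal in this sense if it attains \inf_\pi J(\pi, x) for every initial state x.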

Original language: English
Pages (from-to): 37-55
Number of pages: 19
Journal: Mathematical Methods of Operations Research
Volume: 48
Issue number: 1
State: Published - 1998

Keywords

  • Adaptive policy
  • Average cost criterion
  • Markov control process
  • Projection of estimator
  • Rate of convergence
