Some advances on constrained Markov decision processes in Borel spaces with random state-dependent discount factors

Héctor Jasso-Fuentes, Raquiel R. López-Martínez, J. Adolfo Minjárez-Sosa*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

This paper addresses a class of discrete-time Markov decision processes in Borel spaces with a finite number of cost constraints. The constrained control model considers costs of discounted type with state-dependent discount factors that are subject to external disturbances. Our objective is to prove the existence of optimal control policies and to characterize them according to certain optimality criteria. Specifically, by appropriately rewriting our original constrained problem as an equivalent one on a space of occupation measures, we apply the direct method to show solvability. Next, the problem is formulated as a convex program, and we prove that the existence of a saddle point of the associated Lagrangian operator is equivalent to the existence of an optimal control policy for the constrained problem. Finally, we turn our attention to multi-objective optimization problems, where the existence of Pareto optimal policies can be obtained from the existence of saddle points of the aforementioned Lagrangian or, equivalently, from the existence of optimal control policies for constrained problems.
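The constrained discounted criterion and the Lagrangian mentioned in the abstract can be sketched as follows. The notation below is illustrative only and is not taken verbatim from the paper: the state-dependent random discount factor, the cost indices, and the constraint bounds are assumptions made for exposition.

```latex
% Illustrative sketch (notation assumed, not verbatim from the paper).
% A state-dependent random discount factor \alpha(x_k,\xi_k), driven by
% external disturbances \xi_k, replaces the usual constant factor in the
% expected discounted costs:
\[
  V_j(\pi, x) \;=\; E_x^{\pi}\!\left[\sum_{t=0}^{\infty}
    \Big(\prod_{k=0}^{t-1} \alpha(x_k, \xi_k)\Big)\, c_j(x_t, a_t)\right],
  \qquad j = 0, 1, \dots, q.
\]
% The constrained problem minimizes V_0 over policies \pi subject to
% V_j(\pi, x) \le \theta_j for j = 1, ..., q, with associated Lagrangian
\[
  L(\pi, \lambda) \;=\; V_0(\pi, x)
    \;+\; \sum_{j=1}^{q} \lambda_j \big(V_j(\pi, x) - \theta_j\big),
  \qquad \lambda_j \ge 0.
\]
```

Under this reading, a saddle point $(\pi^*, \lambda^*)$ of $L$ yields an optimal policy for the constrained problem, which is the equivalence the abstract refers to.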

Original language: English
Pages (from-to): 925-951
Number of pages: 27
Journal: Optimization
Volume: 73
Issue number: 4
DOIs
State: Published - 2024

Bibliographical note

Publisher Copyright:
© 2022 Informa UK Limited, trading as Taylor & Francis Group.

Keywords

  • 90C40
  • 93E20
  • Markov decision processes
  • Pareto optimality
  • constrained control problems
  • convex programming
  • random non-constant discount factor
