Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6892738 | Computers & Operations Research | 2017 | 12 Pages |
Abstract
The computation of ϵ-optimal policies for continuous time Markov decision processes (CTMDPs) over finite time intervals is a challenging problem because the optimal policy may change at arbitrary times. Numerical algorithms based on time discretization or uniformization have been proposed for the computation of optimal policies. The uniformization-based algorithm has been shown to be more reliable, and often also more efficient, but is currently only available for processes in which the gain or reward does not depend on the decision taken in a state. In this paper, we present two new uniformization-based algorithms for computing ϵ-optimal policies for CTMDPs with decision-dependent rewards over a finite time horizon. Thanks to a new and tighter upper bound, the newly proposed algorithms can not only be applied to decision-dependent rewards, they also outperform the existing approach for rewards that do not depend on the decision. In particular, for models in which the policy changes only rarely, optimal policies can be computed much faster.
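As background for the uniformization idea the abstract refers to, the sketch below shows a minimal backward-induction scheme on the uniformized discrete-time MDP: each generator Q_a is turned into a stochastic matrix P_a = I + Q_a/λ, and reward rates are scaled by the expected sojourn length 1/λ. This is not the paper's algorithm; the function name, the fixed step count K = ⌈λT⌉, and the reward scaling are illustrative assumptions only (an ϵ-guarantee would instead truncate the Poisson tail, and the paper develops tighter bounds than this crude surrogate).

```python
import numpy as np

def uniformized_backup(Q, r, T):
    """Crude finite-horizon value approximation for a CTMDP via
    uniformization (illustrative sketch, not the paper's method).

    Q[a] : generator matrix under action a (rows sum to 0)
    r[a, s] : reward rate in state s under action a
    T : length of the finite time horizon
    """
    A, S, _ = Q.shape
    # Uniformization rate: at least the largest total exit rate.
    lam = max(-Q[a, s, s] for a in range(A) for s in range(S))
    # Transition matrices of the uniformized discrete-time MDP.
    P = np.stack([np.eye(S) + Q[a] / lam for a in range(A)])
    # Fixed number of uniformized steps covering [0, T]; a rigorous
    # epsilon-bound would choose K from the Poisson(lam * T) tail.
    K = int(np.ceil(lam * T))
    V = np.zeros(S)
    policy = np.empty((K, S), dtype=int)
    for k in range(K - 1, -1, -1):
        # Bellman backup: reward accrued over one sojourn of mean
        # length 1/lam plus the expected continuation value.
        cand = r / lam + P @ V        # shape (A, S)
        policy[k] = np.argmax(cand, axis=0)
        V = cand.max(axis=0)
    return V, policy
```

Because the backup is evaluated at every one of the K ≈ λT uniformized steps, the policy may switch actions at each step; the speedups reported in the abstract come precisely from avoiding this uniform grid when the optimal policy changes only rarely.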
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
Peter Buchholz, Iryna Dohndorf, Dimitri Scheftelowitsch