Article ID: 475740
Journal: Computers & Operations Research
Published Year: 2011
Pages: 9
File Type: PDF
Abstract

Continuous-time Markov decision processes (CTMDPs) with finite state and action spaces have been studied for a long time. It is known that, under fairly general conditions, the reward gained over a finite horizon can be maximized by a so-called piecewise constant policy, which changes only finitely often in a finite interval. Although this result has been available for more than 30 years, numerical approaches to computing the optimal policy and reward have been restricted to discretization methods, which are known to converge to the true solution as the discretization step goes to zero. In this paper, we present a new method, based on uniformization of the CTMDP, that computes an ε-optimal policy up to a predefined precision in a numerically stable way using adaptive time steps.
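To make the uniformization step concrete, the following is a minimal sketch of the standard uniformization construction that the abstract refers to, not the paper's adaptive-step algorithm itself: each per-action generator matrix Q_a (rows summing to zero) is converted into a stochastic matrix P_a = I + Q_a/Λ, where Λ bounds the largest total exit rate. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def uniformize(generators):
    """Uniformize a CTMDP given as {action: (S x S) generator matrix Q_a}.

    Returns the uniformization rate lam and, for each action, the
    transition matrix P_a = I + Q_a / lam of the embedded discrete-time MDP.
    """
    # Uniformization rate: at least the largest total exit rate |q_ss|
    # over all states and actions, so every P_a has nonnegative entries.
    lam = max(abs(np.diag(Q)).max() for Q in generators.values())
    n = next(iter(generators.values())).shape[0]
    return lam, {a: np.eye(n) + Q / lam for a, Q in generators.items()}

# Hypothetical example: a 2-state CTMDP with two actions.
Q_fast = np.array([[-3.0, 3.0],
                   [ 1.0, -1.0]])
Q_slow = np.array([[-0.5, 0.5],
                   [ 1.0, -1.0]])
lam, P = uniformize({"fast": Q_fast, "slow": Q_slow})
# Each P[a] is a proper stochastic matrix; jump epochs of the uniformized
# chain follow a Poisson process with rate lam, which is what lets a
# finite-horizon analysis proceed in discrete steps.
```

In this view, finite-horizon quantities of the CTMDP can be evaluated through Poisson-weighted sums over steps of the discrete-time chain; the adaptive time steps described in the abstract control how this computation is truncated to a prescribed precision.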

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science (General)