Article ID: 5778239
Journal: Journal of Applied Logic
Published Year: 2017
Pages: 51
File Type: PDF
Abstract
In the domain of decision theoretic planning, the factored framework (Factored Markov Decision Process, fmdp) has produced optimized algorithms using structured representations such as Decision Trees (Structured Value Iteration (svi), Structured Policy Iteration (spi)) or Algebraic Decision Diagrams (Stochastic Planning Using Decision Diagrams (spudd)). Since it may be difficult to construct the factored models required by these algorithms, the sdyna architecture, which combines learning and planning algorithms using structured representations, was introduced. However, the state-of-the-art algorithms for incremental learning, structured decision theoretic planning, or reinforcement learning require the problem to be specified only with binary variables and/or use data structures that can be improved in terms of compactness. In this paper, we propose Multi-Valued Decision Diagrams (mdds) as a more efficient data structure for the sdyna architecture and describe a planning algorithm and an incremental learning algorithm dedicated to this structured representation. For both the planning and the learning algorithm, we show experimentally that they yield significant improvements in computation time and in the compactness of the computed policy and of the learned model. We then analyze the combination of these two algorithms in an efficient sdyna instance for simultaneous learning and planning using mdds.
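As an illustration of the data structure the abstract refers to, the following is a minimal sketch of an mdd node in Python. It is not the authors' implementation; the class and variable names (MDDNode, evaluate, the ternary variable "x") are hypothetical, and it only shows why nodes over multi-valued variables with shared sub-diagrams can be more compact than purely binary encodings.

```python
# Illustrative sketch only, not the paper's implementation.
# An MDD node tests one multi-valued variable; children[i] is the sub-diagram
# followed when the variable takes its i-th value. Leaves carry a value.

class MDDNode:
    def __init__(self, var=None, children=None, value=None):
        self.var = var            # variable name, or None for a leaf
        self.children = children  # list of MDDNode, one per variable value
        self.value = value        # payload for leaves (e.g. a state value)

    def evaluate(self, assignment):
        """Follow the diagram for a dict {variable: value index}."""
        if self.var is None:
            return self.value
        return self.children[assignment[self.var]].evaluate(assignment)


# Sharing identical sub-diagrams is what gives the representation its
# compactness: here a ternary variable "x" whose outcome only matters
# when x == 2 reuses a single leaf for the other two values.
zero, one = MDDNode(value=0.0), MDDNode(value=1.0)
diagram = MDDNode(var="x", children=[zero, zero, one])
print(diagram.evaluate({"x": 1}))  # 0.0
print(diagram.evaluate({"x": 2}))  # 1.0
```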
Related Topics
Physical Sciences and Engineering > Mathematics > Logic