Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
405585 | Neural Networks | 2010 | 10 |
Abstract
Appropriately designing sampling policies is highly important for obtaining better control policies in reinforcement learning. In this paper, we first show that the least-squares policy iteration (LSPI) framework allows us to employ statistical active learning methods for linear regression. We then propose a method for designing good sampling policies for efficient exploration, which is particularly useful when the sampling cost of immediate rewards is high. The effectiveness of the proposed method, which we call active policy iteration (API), is demonstrated through simulations with a batting robot.
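As context for the abstract, the following is a minimal sketch of the least-squares policy evaluation step (LSTD-Q) that the LSPI framework builds on: the Q-function is modeled as a linear combination of basis functions and its weights are fitted from sampled transitions by solving a linear system. This is not the authors' implementation of API; the function name `lstdq_weights`, the feature map `phi`, the discount factor `gamma`, and the transition format are assumptions made purely for illustration.

```python
import numpy as np

def lstdq_weights(transitions, phi, policy, gamma=0.95, reg=1e-6):
    """Estimate linear Q-function weights from (s, a, r, s') samples via LSTD-Q.

    transitions: list of (state, action, reward, next_state) tuples (assumed format)
    phi:         feature map phi(state, action) -> 1-D numpy array
    policy:      callable mapping a next_state to the next action under evaluation
    """
    d = len(phi(*transitions[0][:2]))
    A = reg * np.eye(d)           # small regularizer keeps the system well-posed
    b = np.zeros(d)
    for s, a, r, s_next in transitions:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    # Solve A w = b so that Q(s, a) is approximated by phi(s, a) @ w
    return np.linalg.solve(A, b)
```

In LSPI, a step like this alternates with greedy policy improvement; the paper's contribution concerns how the transitions themselves are collected, i.e. choosing sampling policies via active learning so that fewer costly reward samples are needed.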
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Takayuki Akiyama, Hirotaka Hachiya, Masashi Sugiyama