Article code: 404541
Journal code: 677434
Publication year: 2009
English article: 12 pages, PDF
Full-text version: free download
English title of the ISI article
Adaptive importance sampling for value function approximation in off-policy reinforcement learning
Related topics
Engineering and Basic Sciences › Computer Engineering › Artificial Intelligence
English abstract

Off-policy reinforcement learning aims to make efficient use of data samples gathered from a policy that differs from the currently optimized policy. A common approach is to use importance sampling techniques to compensate for the bias in value function estimators caused by the mismatch between the data-sampling policy and the target policy. However, existing off-policy methods often do not explicitly take the variance of the value function estimators into account, so their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
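The bias-variance mechanism described in the abstract can be illustrated with a short sketch. The Python code below is not taken from the paper: every function name, the trajectory-level (rather than per-decision) weighting, the simple cross-validation score, and the toy two-action setup are illustrative assumptions. It raises importance weights to a flattening exponent nu in [0, 1], so that nu = 0 ignores the policy mismatch (biased, low variance) and nu = 1 performs full importance sampling (unbiased, high variance), with nu selected by comparing each candidate's estimate against an unbiased full-IS reference on held-out trajectories.

import numpy as np

rng = np.random.default_rng(0)

def flatten_weights(ratios, nu):
    # Trajectory-level importance weight prod_t pi(a_t|s_t) / pi_b(a_t|s_t),
    # flattened to the power nu in [0, 1]:
    #   nu = 0 -> ignore the policy mismatch (biased, low variance)
    #   nu = 1 -> full importance sampling (unbiased, high variance)
    return np.prod(ratios, axis=1) ** nu

def value_estimate(returns, ratios, nu):
    # Self-normalized weighted average of off-policy returns.
    w = flatten_weights(ratios, nu)
    return np.sum(w * returns) / np.sum(w)

def select_nu(returns, ratios, candidates, n_folds=5):
    # Cross-validation in the spirit of the abstract: score each candidate
    # nu by the squared deviation of its training-fold estimate from an
    # unbiased but noisy full-IS reference (nu = 1) computed on the
    # held-out fold, and pick the minimizer.
    n = len(returns)
    folds = np.array_split(rng.permutation(n), n_folds)
    scores = {nu: 0.0 for nu in candidates}
    for held_out in folds:
        train = np.setdiff1d(np.arange(n), held_out)
        ref = value_estimate(returns[held_out], ratios[held_out], nu=1.0)
        for nu in candidates:
            est = value_estimate(returns[train], ratios[train], nu)
            scores[nu] += (est - ref) ** 2
    return min(scores, key=scores.get)

# Toy demo: a 5-step, two-action problem. The behavior policy picks
# action 0 with probability 0.8; the target policy with probability 0.3.
N, T = 200, 5
p_b, p_t = 0.8, 0.3
actions = (rng.random((N, T)) > p_b).astype(int)       # sampled from behavior
ratios = np.where(actions == 0, p_t / p_b, (1 - p_t) / (1 - p_b))
rewards = actions + 0.5 * rng.standard_normal((N, T))  # action 1 pays more
returns = rewards.sum(axis=1)

nu_star = select_nu(returns, ratios, candidates=[0.0, 0.25, 0.5, 0.75, 1.0])
print("selected nu:", nu_star, "estimate:", value_estimate(returns, ratios, nu_star))

With nu = 0 the estimator collapses to the on-policy sample mean, and with nu = 1 it becomes the standard self-normalized importance sampling estimator; intermediate values trade a small bias for a substantial variance reduction when the two policies differ strongly. The paper itself applies this idea inside value function approximation with a more refined cross-validation criterion; the sketch only shows the trade-off mechanism at the level of simple return estimates.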

Publisher
Database: Elsevier - ScienceDirect
Journal: Neural Networks - Volume 22, Issue 10, December 2009, Pages 1399–1410