Article code: 6854261
Journal code: 1437409
Publication year: 2018
English article: 10-page PDF
Full-text version: Free download
English title of the ISI article
Policy derivation methods for critic-only reinforcement learning in continuous spaces
Persian title translation
روش‌های استخراج سیاست برای یادگیری تقویتی فقط-منتقد در فضاهای پیوسته
Keywords
Reinforcement learning, continuous actions, multivariable systems, optimal control, policy derivation, optimization
Related subjects
Engineering and Basic Sciences > Computer Engineering > Artificial Intelligence
English abstract
This paper addresses the problem of deriving a policy from the value function in the context of critic-only reinforcement learning (RL) in continuous state and action spaces. With continuous-valued states, RL algorithms have to rely on a numerical approximator to represent the value function. By its nature, numerical approximation virtually always exhibits artifacts that degrade the overall performance of the controlled system. In addition, when continuous-valued actions are used, the most common approach is to discretize the action space and exhaustively search for the action that maximizes the right-hand side of the Bellman equation. Such a policy derivation procedure is computationally expensive and results in steady-state error due to the lack of continuity. In this work, we propose policy derivation methods that alleviate these problems by means of action space refinement, continuous approximation, and post-processing of the V-function using symbolic regression. The proposed methods are tested on nonlinear control problems: 1-DOF and 2-DOF pendulum swing-up, and magnetic manipulation. The results show significantly improved performance in terms of cumulative return and computational complexity.
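The baseline procedure the abstract criticizes, discretizing the action space and exhaustively maximizing the right-hand side of the Bellman equation, pi(x) = argmax_a [r(x, a) + gamma * V(f(x, a))], can be made concrete with a short sketch. The snippet below is illustrative only: the dynamics f, reward r, value function V, grid sizes, and the coarse-to-fine search are placeholder assumptions for demonstration, not the paper's actual models or proposed method.

import numpy as np

GAMMA = 0.99  # discount factor (illustrative)

def f(x, a):
    # Placeholder one-step dynamics x' = f(x, a); not the paper's model.
    return x + 0.01 * np.array([x[1], a])

def r(x, a):
    # Placeholder stage reward penalizing state magnitude and control effort.
    return -(x @ x) - 0.01 * a ** 2

def V(x):
    # Placeholder critic, i.e., an approximate value function.
    return -(x @ x)

def greedy_action(x, actions):
    # Baseline policy derivation: exhaustive search over a discrete action
    # set for the maximizer of r(x, a) + GAMMA * V(f(x, a)).
    values = [r(x, a) + GAMMA * V(f(x, a)) for a in actions]
    return actions[int(np.argmax(values))]

# Coarse grid over the continuous action interval, then a finer grid around
# the coarse maximizer: a simple form of action space refinement.
x = np.array([0.5, 0.0])
coarse = np.linspace(-2.0, 2.0, 9)
a0 = greedy_action(x, coarse)
fine = np.linspace(a0 - 0.5, a0 + 0.5, 21)
print("coarse action:", a0, "refined action:", greedy_action(x, fine))

With only the coarse grid, the chosen action is quantized, which is one source of the steady-state error the abstract mentions; the refinement step narrows the search interval so a finer action can be selected at little extra cost.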
Publisher
Database: Elsevier - ScienceDirect
Journal: Engineering Applications of Artificial Intelligence - Volume 69, March 2018, Pages 178-187