Article ID: 326321 | Journal ID: 542230 | Publication year: 2016 | Full text: 6-page PDF
English title (ISI article)
Empirical priors for reinforcement learning models
Related subjects
Engineering and Basic Sciences: Mathematics, Applied Mathematics
Abstract (English)


• Parameter estimation for reinforcement learning models is difficult.
• Empirical priors improve predictive accuracy, reliability, identifiability, and detection of individual differences.
• These priors are fairly robust across model variants.

Computational models of reinforcement learning have played an important role in understanding learning and decision-making behavior, as well as the neural mechanisms underlying these behaviors. However, fitting the parameters of these models can be challenging: the parameters may be poorly identifiable, the estimates can be unreliable, and the fitted models may have poor predictive validity. Prior distributions over the parameters help regularize the estimates and mitigate these problems to some extent, but choosing a good prior is itself challenging. This paper presents empirical priors for reinforcement learning models, showing that parameter estimates obtained under priors derived from a relatively large dataset are more identifiable and more reliable, and yield better predictive validity, than estimates obtained under uniform priors.
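To make the abstract's fitting procedure concrete, the sketch below shows maximum a posteriori (MAP) estimation for a simple Q-learning model of a two-armed bandit, where a prior over the learning rate and inverse temperature regularizes the likelihood. The specific prior families and hyperparameters here (a Beta prior on the learning rate, a Gamma prior on the inverse temperature) are illustrative assumptions, not the empirical priors estimated in the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta, gamma

rng = np.random.default_rng(0)

def simulate(alpha, beta_temp, n_trials=500, p_reward=(0.7, 0.3)):
    """Simulate choices/rewards from a softmax Q-learning agent on a 2-armed bandit."""
    Q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        logits = beta_temp * Q
        probs = np.exp(logits) / np.exp(logits).sum()
        c = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[c])
        Q[c] += alpha * (r - Q[c])          # prediction-error update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_posterior(params, choices, rewards):
    """Negative log posterior: Q-learning choice likelihood plus log prior."""
    alpha, beta_temp = params
    if not (0.0 < alpha < 1.0 and beta_temp > 0.0):
        return np.inf
    Q = np.zeros(2)
    ll = 0.0
    for c, r in zip(choices, rewards):
        logits = beta_temp * Q
        ll += logits[c] - np.log(np.exp(logits).sum())   # softmax log-likelihood
        Q[c] += alpha * (r - Q[c])
    # Hypothetical priors for illustration only (not the paper's empirical priors):
    log_prior = beta.logpdf(alpha, 2, 2) + gamma.logpdf(beta_temp, 2, scale=3)
    return -(ll + log_prior)

choices, rewards = simulate(alpha=0.3, beta_temp=4.0)
fit = minimize(neg_log_posterior, x0=[0.5, 1.0], args=(choices, rewards),
               bounds=[(1e-3, 1 - 1e-3), (1e-3, 20.0)])
alpha_hat, beta_hat = fit.x
```

Replacing the Beta/Gamma terms in `log_prior` with distributions fit to a large pooled dataset is what turns this generic MAP scheme into an empirical-prior scheme; with a uniform prior the `log_prior` term is constant and the procedure reduces to maximum likelihood.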

Publisher
Database: Elsevier - ScienceDirect
Journal: Journal of Mathematical Psychology - Volume 71, April 2016, Pages 1–6
Authors