Article code: 392655
Journal code: 665146
Publication year: 2014
English article: 12 pages, PDF
Full-text version: Free download
English title of the ISI article
Approximate Bayesian recursive estimation
Persian translation of the title
برآورد بازگشتی تقریبی بیزی
Related topics
Engineering and Basic Sciences > Computer Engineering > Artificial Intelligence
English abstract

Bayesian learning provides a firm theoretical basis for the design and exploitation of algorithms in data-stream processing (preprocessing, change detection, hypothesis testing, clustering, etc.). Primarily, it relies on recursive parameter estimation of firmly bounded complexity. As a rule, such estimation has to approximate the exact posterior probability density (pd), which carries the unreduced information about the estimated parameter. In the recursive treatment of the data stream, the latest approximate pd is usually updated using the treated parametric model and the newest data, and then approximated again. The fact that approximation errors may accumulate over time is mostly neglected in the estimator design and, at most, checked ex post. The paper inspects the estimator design with respect to this error accumulation and concludes that a sort of forgetting (pd flattening) is an indispensable part of reliable approximate recursive estimation. The conclusion results from a Bayesian problem formulation complemented by the minimum Kullback–Leibler divergence principle. The claims of the paper are supported by a straightforward analysis, by elaboration of the proposed estimator for widely applicable parametric models, and by numerical illustration.
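To make the forgetting (pd flattening) idea concrete, here is a minimal, hypothetical sketch. It is not the estimator derived in the paper: it assumes a Gaussian observation model with known variance, a conjugate Normal prior, and the classical stabilised exponential-forgetting form in which the flattened pd is proportional to (posterior)^λ · (alternative)^(1−λ); for Gaussian pds this acts linearly on the natural parameters (precision and precision × mean). The names SIGMA2, LAM, bayes_update, and flatten are illustrative, not taken from the source.

```python
import numpy as np

# Minimal sketch of recursive Bayesian estimation of a Gaussian mean with
# exponential forgetting (posterior "flattening" toward an alternative pd).
# Assumptions (not from the paper): known observation variance SIGMA2,
# a conjugate Normal prior, and the stabilised-forgetting form
# f_t(theta) ∝ posterior(theta)^LAM * alternative(theta)^(1 - LAM).

SIGMA2 = 1.0   # known observation noise variance (assumed)
LAM = 0.95     # forgetting factor; LAM = 1 recovers exact Bayes updating

def bayes_update(mean, prec, y, sigma2=SIGMA2):
    """One exact Bayes step for a N(mean, 1/prec) prior and y ~ N(theta, sigma2)."""
    prec_new = prec + 1.0 / sigma2
    mean_new = (prec * mean + y / sigma2) / prec_new
    return mean_new, prec_new

def flatten(mean, prec, alt_mean, alt_prec, lam=LAM):
    """Flatten the posterior toward the alternative pd (exponential forgetting)."""
    prec_f = lam * prec + (1.0 - lam) * alt_prec
    mean_f = (lam * prec * mean + (1.0 - lam) * alt_prec * alt_mean) / prec_f
    return mean_f, prec_f

rng = np.random.default_rng(0)
true_theta = 2.0
alt_mean, alt_prec = 0.0, 1e-2      # flat alternative pd (assumed)
mean, prec = alt_mean, alt_prec     # start from the alternative

for t in range(200):
    y = true_theta + np.sqrt(SIGMA2) * rng.standard_normal()
    mean, prec = bayes_update(mean, prec, y)               # data update
    mean, prec = flatten(mean, prec, alt_mean, alt_prec)   # forgetting step

print(f"estimate {mean:.3f}, posterior std {prec ** -0.5:.3f}")
```

In this sketch, setting LAM = 1 gives exact conjugate Bayes updating, while LAM < 1 keeps flattening the posterior toward the alternative pd, which limits how much past (possibly approximation-corrupted) information is retained at the cost of a posterior that never fully concentrates.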

Publisher
Database: Elsevier - ScienceDirect
Journal: Information Sciences - Volume 285, 20 November 2014, Pages 100–111
Authors