Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
1149918 | 957903 | 2008 | 10-page PDF | Free download |

The study of regularized learning algorithms associated with the least squares loss is an important topic in learning theory. Wu et al. [2006. Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192] established fast learning rates of order $m^{-\theta}$ for least squares regularized regression in reproducing kernel Hilbert spaces, under assumptions on the Mercer kernel and on the regression function, where $m$ denotes the number of samples and $\theta$ may be arbitrarily close to 1. As in most existing work, they assumed that the samples are drawn independently from the underlying probability distribution. However, independence is a very restrictive assumption; without it, the study of learning algorithms is more involved, and little progress has been made. The aim of this paper is to establish the above results of Wu et al. for dependent samples, where the dependence is expressed in terms of an exponentially strongly mixing sequence.
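The abstract refers to the regularized least-squares scheme and to exponentially strongly mixing samples without stating them explicitly; the following is a standard formulation, given here only as a hedged reconstruction (the symbols $f_{\mathbf z,\lambda}$, $\mathcal H_K$, $\lambda$, $\alpha(k)$, $a$, $c$, $\gamma$ are notation introduced for illustration, not taken from the paper).

```latex
% Least-squares regularized regression in a reproducing kernel Hilbert space H_K:
f_{\mathbf{z},\lambda}
  \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}
        \Big\{ \tfrac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^{2}
               \;+\; \lambda \|f\|_{K}^{2} \Big\},
  \qquad \lambda > 0.

% A sequence (z_i) is commonly called exponentially strongly mixing when its
% alpha-mixing coefficients decay at an exponential rate:
\alpha(k) \;\le\; a\,\exp\!\bigl(-c\,k^{\gamma}\bigr),
  \qquad a,\; c,\; \gamma > 0.
```

For concreteness, the sketch below computes the same regularized least-squares estimator in closed form via the representer theorem, using a Gaussian Mercer kernel and inputs generated by an AR(1) process (a simple example of an exponentially mixing sequence). It is a generic illustration under these assumptions, not the analysis carried out in the paper; the kernel width, regularization parameter, and data model are all hypothetical.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian (Mercer) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def regularized_least_squares(X, y, lam, sigma=1.0):
    """Minimize (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over f in H_K.

    By the representer theorem, f(x) = sum_i alpha_i K(x, x_i) with
    alpha = (K + m * lam * I)^{-1} y.
    """
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + m * lam * np.eye(m), y)

    def predict(X_new):
        return gaussian_kernel(X_new, X, sigma) @ alpha

    return predict

# Toy usage: inputs from an AR(1) process, i.e. dependent (non-i.i.d.) samples.
rng = np.random.default_rng(0)
m = 200
x = np.zeros(m)
for t in range(1, m):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)   # geometrically mixing inputs
X = x[:, None]
y = np.sin(2.0 * x) + rng.normal(scale=0.1, size=m)

f_hat = regularized_least_squares(X, y, lam=1e-3, sigma=0.5)
print(f_hat(np.array([[0.0], [1.0]])))              # predictions at two test points
```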
Journal: Journal of Statistical Planning and Inference - Volume 138, Issue 7, 1 July 2008, Pages 2180–2189