Article ID: 470707
Journal: Computers & Mathematics with Applications
Published Year: 2010
Pages: 10
File Type: PDF
Abstract

We adopt a learning theory viewpoint to study a family of learning schemes for regression related to positive linear operators in approximation theory. Such a learning scheme is generated from a random sample by a kernel function parameterized by a scaling parameter. The essential difference between this algorithm and classical approximation schemes is the randomness of the sampling points, which violates the well-distributed sampling condition often required in approximation theory. We investigate the efficiency of the learning algorithm in a regression setting and present learning rates stated in terms of the smoothness of the regression function, the size of the variance, and the distance of the kernel centers from regular grids. The error analysis is conducted by estimating the sample error and the approximation error. Two examples, with kernel functions related to continuous Bernstein bases and to Jackson kernels, are studied in detail, and concrete learning rates are obtained.
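To make the flavor of such a scheme concrete, the following is a minimal Python sketch of a Bernstein-type estimator built from a random sample. It is an illustration under stated assumptions, not the paper's exact construction: the rule that assigns each Bernstein node k/n the response of its nearest sample point is hypothetical, chosen only to show how randomly placed kernel centers replace the regular grid of classical Bernstein approximation.

```python
import math
import random

def bernstein_basis(n, k, x):
    """Bernstein basis polynomial p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k) on [0,1]."""
    return math.comb(n, k) * x**k * (1 - x)**(n - k)

def fit_bernstein_estimator(sample, n):
    """Build f(x) = sum_k y_(k) * p_{n,k}(x), where y_(k) is the response of
    the sample point closest to the grid node k/n. This nearest-point
    assignment is a hypothetical rule used purely for illustration."""
    assigned = []
    for k in range(n + 1):
        node = k / n
        xi, yi = min(sample, key=lambda p: abs(p[0] - node))
        assigned.append(yi)
    def f(x):
        return sum(assigned[k] * bernstein_basis(n, k, x) for k in range(n + 1))
    return f

if __name__ == "__main__":
    random.seed(0)
    target = lambda x: math.sin(2 * math.pi * x)        # regression function
    # Noisy random sample: inputs are NOT on a regular grid.
    sample = [(x, target(x) + random.gauss(0, 0.1))
              for x in (random.random() for _ in range(200))]
    f = fit_bernstein_estimator(sample, n=20)
    for x in (0.1, 0.5, 0.9):
        print(f"x={x:.1f}  estimate={f(x):+.3f}  truth={target(x):+.3f}")
```

In this sketch the gap between each random input and its nearest grid node k/n plays the role of the "distance of kernel centers from regular grids" in the abstract: when the sample happens to cover the grid well, the estimator behaves like the classical Bernstein operator, and large gaps degrade the approximation.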
