| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4607434 | Journal of Approximation Theory | 2013 | 19 | |
Abstract
Regularization schemes with an ℓ1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the ℓ1-regularizer. We show that convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition.
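To make the setting concrete, the following is a minimal sketch of ℓ1-regularized kernel regression of the general kind the abstract describes: the estimator is a kernel expansion f(x) = Σ_j c_j K(x, x_j) whose coefficient vector c minimizes an empirical least-squares loss plus an ℓ1 penalty, which drives many c_j to exactly zero. This is an illustration, not the authors' specific scheme; the Gaussian kernel choice, the ISTA (proximal gradient) solver, and all names (`kernel_l1_regression`, `lam`, `sigma`) are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)); a smooth kernel,
    # chosen here only for illustration.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_l1_regression(X, y, lam=0.01, sigma=0.5, n_iter=1000):
    """Sketch: minimize (1/2n)||K c - y||^2 + lam * ||c||_1 over c via ISTA.

    Returns the (typically sparse) coefficient vector c and the kernel
    matrix K, so that predictions on the training inputs are K @ c.
    """
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Lipschitz constant of the gradient of the smooth part, (1/n) K^T (K c - y).
    L = np.linalg.norm(K, 2) ** 2 / n
    step = 1.0 / L
    c = np.zeros(n)
    for _ in range(n_iter):
        grad = K.T @ (K @ c - y) / n
        z = c - step * grad
        # Soft-thresholding: the proximal map of the l1 penalty,
        # which sets small coefficients exactly to zero (sparsity).
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return c, K
```

The soft-thresholding step is what produces the sparse representations mentioned in the abstract: coefficients whose gradient update stays below `step * lam` in magnitude are zeroed out, so only a subset of kernel sections K(·, x_j) enters the final estimator.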
Related Topics
Physical Sciences and Engineering
Mathematics
Analysis
Authors
Hong-Yan Wang, Quan-Wu Xiao, Ding-Xuan Zhou