Article ID: 8899393
Journal: Journal of Mathematical Analysis and Applications
Published Year: 2018
Pages: 14
File Type: PDF
Abstract
We study distributed regression learning with a coefficient regularization scheme in a reproducing kernel Hilbert space (RKHS). The algorithm randomly partitions the sample set {z_i}_{i=1}^N into m disjoint subsets of equal size, applies the coefficient regularization scheme to each subset to produce a local output function, and averages these local output functions to obtain the final global estimator. We derive an error bound in expectation in the L^2-metric and prove asymptotic convergence for this distributed coefficient regularization learning. Satisfactory learning rates are then derived under a standard regularity condition on the regression function, revealing an interesting phenomenon: when m ≤ N^s with s small enough, the distributed algorithm achieves the same convergence rate as the algorithm that processes the whole data set on a single machine.
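The divide-and-conquer pipeline described above (random equal-size partition, per-subset l2 coefficient regularization, averaging of the local estimators) can be sketched in a few lines. The following is a minimal illustration under our own assumptions, not the paper's implementation: the Gaussian kernel, the regularization parameter lam, the kernel width sigma, and all function names are hypothetical choices for concreteness.

    import numpy as np

    def gaussian_kernel(X1, X2, sigma=0.5):
        # Assumed kernel: K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
        d2 = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    def local_coefficient_regularization(X, y, lam):
        # l2 coefficient regularization on one subset of size n:
        # minimize (1/n) ||K a - y||^2 + lam ||a||^2 over a in R^n,
        # whose normal equations give (K^T K + n lam I) a = K^T y.
        # The local output function is f(x) = sum_j a_j K(x, x_j).
        n = len(y)
        K = gaussian_kernel(X, X)
        a = np.linalg.solve(K.T @ K + n * lam * np.eye(n), K.T @ y)
        return X, a

    def distributed_estimator(X, y, m, lam):
        # Randomly partition the N samples into m disjoint subsets of
        # (roughly) equal size, solve each local problem, and return the
        # average of the m local output functions as the global estimator.
        N = len(y)
        idx = np.random.permutation(N)
        local_fits = [local_coefficient_regularization(X[s], y[s], lam)
                      for s in np.array_split(idx, m)]

        def f_bar(X_test):
            preds = [gaussian_kernel(X_test, Xs) @ a for Xs, a in local_fits]
            return np.mean(preds, axis=0)

        return f_bar

Illustrative usage on synthetic 1-D data (parameter values are again assumptions):

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(400, 1))
    y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(400)
    f_bar = distributed_estimator(X, y, m=4, lam=1e-3)
    X_test = np.linspace(-1, 1, 100)[:, None]
    mse = np.mean((f_bar(X_test) - np.sin(np.pi * X_test[:, 0]))**2)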
Related Topics
Physical Sciences and Engineering > Mathematics > Analysis