Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
1145581 | Journal of Multivariate Analysis | 2014 | 16 | |
We present a new oracle inequality for generic regularized empirical risk minimization algorithms learning from stationary α-mixing processes. Our main tool for deriving this inequality is a rather involved version of the so-called peeling method. We then use the oracle inequality to derive learning rates for several learning methods: empirical risk minimization (ERM), least squares support vector machines (SVMs) with given generic kernels, and SVMs with Gaussian RBF kernels for both least squares and quantile regression. It turns out that, for i.i.d. processes, our learning rates for ERM and for SVMs with Gaussian kernels match the optimal rates up to an arbitrarily small extra term in the exponent, while in the remaining cases our rates are at least close to the optimal rates.
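As background for the abstract, the following is a minimal sketch of the two central objects in this setting, written in their standard textbook form rather than quoted from the paper: the α-mixing coefficients that quantify the dependence of the process, and the generic regularized empirical risk minimizer covered by the oracle inequality. The symbols H, Ω, L, and λ are illustrative placeholders; the paper's exact assumptions may differ.

```latex
% Standard alpha-mixing coefficients of a stationary process (X_i)_{i>=1}:
% the process is alpha-mixing if \alpha(n) -> 0 as n -> \infty.
\alpha(n) = \sup_{k \ge 1} \;
  \sup_{\substack{A \in \sigma(X_1,\dots,X_k) \\ B \in \sigma(X_{k+n},X_{k+n+1},\dots)}}
  \bigl| P(A \cap B) - P(A)\,P(B) \bigr|

% Generic regularized empirical risk minimization over a hypothesis space H
% (e.g. an RKHS) with regularizer \Omega and loss L, fitted to n samples.
% Least squares SVMs use L(y,t) = (y - t)^2 and \Omega(f) = \lambda \|f\|_H^2;
% quantile regression replaces the loss by the pinball loss.
f_{D,\lambda} \in \operatorname*{arg\,min}_{f \in H} \;
  \Omega(f) + \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr)
```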