Article ID: 4608525
Journal: Journal of Complexity
Published Year: 2016
Pages: 25
File Type: PDF
Abstract

In this paper we establish error estimates for multi-penalty regularization under a general smoothness assumption in the context of learning theory. One motivation for this work is the theoretical convergence analysis of two-parameter regularization in the manifold learning setting. In this spirit, we obtain error bounds for the manifold learning problem using the more general framework of multi-penalty regularization. We propose a new parameter choice rule, the "balanced-discrepancy principle," and analyze the convergence of the scheme with the help of the estimated error bounds. We show that multi-penalty regularization with the proposed parameter choice exhibits convergence rates similar to those of single-penalty regularization. Finally, on a series of test samples we demonstrate the superiority of multi-parameter regularization over single-penalty regularization.
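The abstract does not state the exact functional studied, so the following is only a minimal numerical sketch of a standard two-parameter (multi-penalty) scheme of the kind discussed: a kernel least-squares fit with a norm penalty weighted by lambda_1 and a second, graph-Laplacian penalty weighted by lambda_2. The Gaussian kernel, the Laplacian construction, and the parameter values are illustrative assumptions, not the paper's setting; single-penalty regularization is recovered by setting lambda_2 = 0.

    import numpy as np

    def multi_penalty_solution(K, L, y, lam1, lam2):
        """Coefficients of f = sum_i a_i K(x_i, .) minimizing
        (1/m)||K a - y||^2 + lam1 a^T K a + lam2 a^T K L K a.
        Setting the gradient to zero and cancelling one factor of K
        (assuming K is nonsingular) gives the linear system
        ((1/m) K + lam1 I + lam2 L K) a = (1/m) y.
        """
        m = K.shape[0]
        A = K / m + lam1 * np.eye(m) + lam2 * L @ K
        return np.linalg.solve(A, y / m)

    # Toy data (assumed, for illustration): noisy samples of sin on [0, 2*pi].
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 2 * np.pi, 60))
    y = np.sin(x) + 0.1 * rng.standard_normal(60)

    # Gaussian kernel matrix and a simple graph Laplacian as the second penalty.
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.5)
    W = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
    Lap = np.diag(W.sum(axis=1)) - W

    alpha_multi = multi_penalty_solution(K, Lap, y, lam1=1e-3, lam2=1e-4)
    alpha_single = multi_penalty_solution(K, Lap, y, lam1=1e-3, lam2=0.0)  # single penalty as a special case

    for name, a in [("multi-penalty", alpha_multi), ("single-penalty", alpha_single)]:
        print(name, "training RMSE:", np.sqrt(np.mean((K @ a - y) ** 2)))

The sketch only illustrates how a second penalty term enters the normal equations; the paper's balanced-discrepancy principle for choosing lambda_1 and lambda_2 is not reproduced here.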

Related Topics
Physical Sciences and Engineering › Mathematics › Analysis