Article ID: 532054
Journal: Pattern Recognition
Published Year: 2015
Pages: 8
File Type: PDF
Abstract

• A unifying optimization problem formulated for semi-supervised subspace learning.
• Nuclear-norm regularized optimization tackled by an efficient infinite-dimensional greedy search.
• Nonlinear kernel extension introduced with no extra computational complexity.
• Superior performance over existing methods on several interesting datasets.

Subspace estimation is of paramount importance in dealing with high-dimensional, noisy data. In this paper we consider a semi-supervised learning setup where supervised information (e.g., class labels) is available for only a subset of the data samples. We first formulate a unifying optimization problem that subsumes the well-known principal component analysis as a special case in the unsupervised scenario, while exploiting labeled data effectively. To circumvent the difficult matrix rank constraints in the original problem, we propose a nuclear-norm-based relaxation that yields a convex optimization problem. We then provide an infinite-dimensional greedy search algorithm that solves this problem efficiently. An extension to nonlinear dimensionality reduction is also introduced; via a dual representation with the kernel trick, it is as efficient as the linear model. The effectiveness of the proposed approach is demonstrated experimentally on several semi-supervised learning problems.
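To make the relaxation step concrete: the abstract describes replacing a rank-constrained subspace objective with a convex, nuclear-norm-regularized one. As an illustration only (the paper's exact objective, its loss on labeled data, and its infinite-dimensional greedy solver are not reproduced in this abstract), the sketch below solves one plausible instance of such a relaxation, min_M ||X - MX||_F^2 + lam ||Y_L - M X_L||_F^2 + gamma ||M||_*, by proximal gradient descent, where the proximal step for the nuclear norm is singular value thresholding. All variable names, the squared-loss choice, and the solver are assumptions for the demo, not the authors' method.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox operator of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    return (U * s) @ Vt

def semi_supervised_subspace(X, XL, YL, lam=1.0, gamma=0.1, iters=200):
    """Proximal-gradient sketch (illustrative, not the paper's algorithm) for
        min_M ||X - M X||_F^2 + lam * ||YL - M XL||_F^2 + gamma * ||M||_*
    X  : (d, n)  all samples as columns (unsupervised reconstruction term)
    XL : (d, nl) labeled samples; YL : (d, nl) supervised targets for them."""
    d = X.shape[0]
    # Step size from a Lipschitz bound on the smooth part's gradient.
    L = 2 * (np.linalg.norm(X @ X.T, 2) + lam * np.linalg.norm(XL @ XL.T, 2))
    step = 1.0 / L
    M = np.zeros((d, d))
    for _ in range(iters):
        grad = 2 * (M @ X - X) @ X.T + 2 * lam * (M @ XL - YL) @ XL.T
        M = svt(M - step * grad, step * gamma)  # gradient step, then nuclear-norm prox
    return M

# Toy usage: 50-dim data, 200 samples, 40 of them "labeled".
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
XL, YL = X[:, :40], X[:, :40]  # identity targets, purely for the demo
M = semi_supervised_subspace(X, XL, YL)
print("effective rank:", int((np.linalg.svd(M, compute_uv=False) > 1e-6).sum()))
```

The nuclear norm is the convex envelope of the rank function on the unit spectral-norm ball, which is why thresholding singular values in the prox step drives M toward low rank; the paper's greedy search and kernel extension pursue the same relaxed objective by different, more scalable means.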

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors