Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
532054 | 869898 | 2015 | 8-page PDF | Free Download |
• A unifying optimization problem formulated for semi-supervised subspace learning.
• Nuclear-norm regularized optimization tackled by efficient inf-dim greedy search.
• Nonlinear kernel extension introduced with no extra computational complexity.
• Superior performance to existing methods demonstrated on several datasets.
Subspace estimation is of paramount importance in dealing with high-dimensional, noisy data. In this paper we consider a semi-supervised learning setup where certain supervised information (e.g., class labels) is available for only a subset of the data samples. First we formulate a unifying optimization problem that subsumes the well-known principal component analysis in the unsupervised scenario as a special case, while exploiting labeled data effectively. To circumvent the difficult matrix rank constraints in the original problem, we propose a nuclear-norm relaxation that yields a convex optimization problem. We then provide an infinite-dimensional greedy search algorithm that solves the optimization problem efficiently. An extension to nonlinear dimensionality reduction is also introduced, which is as efficient as the linear model via a dual representation with the kernel trick. The effectiveness of the proposed approach is demonstrated experimentally on several semi-supervised learning problems.
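The paper's specific greedy algorithm is not reproduced here, but the core idea behind nuclear-norm relaxation of a rank constraint can be illustrated with singular value thresholding, the proximal operator of the nuclear norm. The sketch below is a generic illustration (function name `svt` and all parameter values are our own, not from the paper): soft-thresholding the singular values of a noisy data matrix zeroes out small noise directions and recovers a low-rank subspace estimate without ever imposing a hard rank constraint.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||X||_* evaluated at M. Each singular value is
    soft-thresholded by tau, so directions with small energy
    are discarded and a low-rank estimate remains."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)          # soft-threshold the spectrum
    return (U * s_thr) @ Vt                   # rebuild the low-rank matrix

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: a rank-2 signal in 20 dimensions plus small noise.
    L = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 20))
    X = L + 0.1 * rng.standard_normal((100, 20))

    X_hat = svt(X, tau=5.0)
    print("rank of input:   ", np.linalg.matrix_rank(X))
    print("rank of estimate:", np.linalg.matrix_rank(X_hat))
```

With a threshold above the noise level, the singular values contributed by the noise are set exactly to zero, so the estimate's rank collapses to that of the underlying signal; this convexity-preserving shrinkage is what makes nuclear-norm formulations tractable compared with a hard rank constraint.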
Journal: Pattern Recognition - Volume 48, Issue 4, April 2015, Pages 1563–1570