Article ID: 4948344
Journal: Neurocomputing
Published Year: 2016
Pages: 10
File Type: PDF
Abstract
In the past decade, we have witnessed a surge of interest in learning low-dimensional subspaces for dimension reduction (DR). However, when faced with features from multiple views, most DR methods fail to integrate the compatible and complementary information of multi-view features when constructing a low-dimensional subspace. Moreover, multi-view features often lie in spaces of different dimensions, which further complicates multi-view subspace learning. How to learn a single common subspace that exploits the information in multi-view features is therefore important but challenging. To address this issue, we propose a multi-view sparse subspace learning method called Multi-view Sparsity Preserving Projection (MvSPP). MvSPP seeks a set of linear transforms that project multi-view features into one common low-dimensional subspace in which the multi-view sparse reconstructive weights are preserved as much as possible. In this way, MvSPP avoids the incorrect sparse correlations that the global nature of sparse representation can introduce when only a single view is used. A co-regularization scheme is designed to integrate multi-view features and obtain a common subspace that is consistent across views. An iterative alternating strategy is presented to obtain the optimal solution of MvSPP. Experiments on several multi-view datasets demonstrate the excellent performance of the proposed method.
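The following is a minimal sketch of the kind of pipeline the abstract describes, not the authors' exact formulation: per-view sparse reconstructive weights are assumed to come from Lasso coding, agreement across views is encouraged with a simple trace-based co-regularizer, and each projection is updated via a generalized eigenproblem inside an alternating loop. All names and parameters (`alpha`, `lam`, the eigenvalue normalization) are illustrative assumptions.

```python
import numpy as np
from numpy.linalg import multi_dot
from scipy.linalg import eigh
from sklearn.linear_model import Lasso


def sparse_weights(X, alpha=0.05):
    """Sparse reconstructive weights: column i of S codes sample i over the others."""
    d, n = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, idx], X[:, i])   # dictionary = remaining samples
        S[idx, i] = lasso.coef_
    return S


def mvspp_sketch(views, k=10, lam=0.1, n_iter=10, alpha=0.05):
    """views: list of (d_v, n) arrays with columns aligned across views.
    Returns one projection matrix P_v of shape (d_v, k) per view."""
    S_list = [sparse_weights(X, alpha) for X in views]
    # SPP-style weight-preserving matrices: S + S^T - S^T S for each view.
    W_list = [S + S.T - S.T @ S for S in S_list]
    # Initialize each projection with the top principal directions of that view.
    P_list = []
    for X in views:
        _, vecs = np.linalg.eigh(X @ X.T)
        P_list.append(vecs[:, -k:])
    for _ in range(n_iter):
        for v, X in enumerate(views):
            # Preserve this view's sparse reconstructive weights after projection.
            M = multi_dot([X, W_list[v], X.T])
            # Co-regularization: align with the other views' current embeddings.
            for w, Xw in enumerate(views):
                if w == v:
                    continue
                Z = P_list[w].T @ Xw                     # (k, n) embedding of view w
                M = M + lam * multi_dot([X, Z.T, Z, X.T])
            C = X @ X.T + 1e-6 * np.eye(X.shape[0])      # regularized scatter
            # Top-k generalized eigenvectors of M p = mu C p.
            _, vecs = eigh(M, C)
            P_list[v] = vecs[:, -k:]
    return P_list


# Toy usage: two views of the same 40 samples living in different dimensions.
rng = np.random.default_rng(0)
views = [rng.standard_normal((30, 40)), rng.standard_normal((50, 40))]
P1, P2 = mvspp_sketch(views, k=5)
Z1, Z2 = P1.T @ views[0], P2.T @ views[1]   # common 5-dimensional embeddings
```

The alternating structure mirrors the iterative strategy mentioned in the abstract: with the other views' projections held fixed, each per-view update reduces to a single generalized eigenproblem.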
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors