Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
531793 | 869876 | 2016 | 11-page PDF | Free download |
• Two non-linear semi-supervised embedding methods are proposed.
• These methods elegantly integrate sparsity preservation and constrained embedding.
• The second framework provides a non-linear embedding together with its out-of-sample extension.
• Classification performance after embedding is assessed on eight image datasets.
• KNN and SVM classifiers are applied to the obtained embeddings.
• Experimental results on eight public image datasets show that the proposed methods outperform competing semi-supervised embedding techniques.
In this paper, two semi-supervised embedding methods are proposed, namely Constrained Sparsity Preserving Embedding (CSPE) and Flexible Constrained Sparsity Preserving Embedding (FCSPE). CSPE is a semi-supervised embedding method that can be considered as a semi-supervised extension of Sparsity Preserving Projections (SPP) integrated with the idea of in-class constraints. Both labeled and unlabeled data can be utilized within the CSPE framework. However, CSPE does not have an out-of-sample extension, since the projection of unseen samples cannot be obtained directly. In order to achieve inductive semi-supervised learning, i.e. to be able to handle unseen samples, we propose FCSPE, which can simultaneously provide a non-linear embedding and an approximate linear projection within one regression function. FCSPE simultaneously achieves the following: (i) the local sparse structures are preserved, (ii) data samples with the same label are mapped onto one point in the projection space, and (iii) a linear projection that best approximates the non-linear embedding is estimated. Experimental results on eight public image data sets demonstrate the effectiveness of the proposed methods as well as their superiority over many competitive semi-supervised embedding techniques.
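The abstract omits the optimization details, but the sparsity-preserving step that SPP and the proposed CSPE/FCSPE build on can be sketched as follows: each sample is coded sparsely over the remaining samples, and a projection that preserves those sparse reconstruction weights is obtained from a generalized eigenproblem. The snippet below is a minimal illustrative sketch of plain SPP only, not the authors' implementation; the function and parameter names (`spp_projection`, `alpha`, `reg`, `n_components`) are assumptions, and the in-class constraints and the flexible regression-based out-of-sample extension that define CSPE and FCSPE are omitted.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def sparse_reconstruction_weights(X, alpha=0.05):
    """Code each sample over the remaining samples with an l1-penalised fit.

    X has shape (n_samples, n_features); the returned S is (n, n) with S[i, i] = 0.
    """
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)        # exclude the sample itself
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[idx].T, X[i])               # x_i ~= sum_j s_ij * x_j
        S[i, idx] = lasso.coef_
    return S

def spp_projection(X, n_components=2, alpha=0.05, reg=1e-6):
    """Linear sparsity-preserving projection via a generalised eigenproblem."""
    S = sparse_reconstruction_weights(X, alpha)
    M = S + S.T - S.T @ S                       # sparsity-preserving affinity matrix
    A = X.T @ M @ X
    B = X.T @ X + reg * np.eye(X.shape[1])      # regularised for numerical stability
    vals, vecs = eigh(A, B)                     # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_components]      # keep directions with largest eigenvalues

# Toy usage: project 40 random 10-D samples onto 2 dimensions.
X = np.random.RandomState(0).randn(40, 10)
W = spp_projection(X)
Z = X @ W                                       # embedded data, shape (40, 2)
```

In this hedged sketch, CSPE would additionally force labeled samples of the same class onto a single point in the embedding, and FCSPE would fit a regression from the input space to that embedding so unseen samples can be projected.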
Journal: Pattern Recognition - Volume 60, December 2016, Pages 813–823