Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
532399 | 869947 | 2012 | 13-page PDF | Free Download |

How to define sparse affinity weight matrices is still an open problem in existing manifold learning algorithms. In this paper, we propose a novel unsupervised learning method called Non-negative Sparseness Preserving Embedding (NSPE) for linear dimensionality reduction. Differing from manifold learning-based subspace learning methods such as Locality Preserving Projections (LPP) and Neighbor Preserving Embedding (NPE), and from the recently proposed sparse-representation-based Sparsity Preserving Projections (SPP), NSPE preserves the non-negative sparse reconstruction relationships in the low-dimensional subspace. Another novelty of NSPE is its sparseness constraint, which is imposed directly on the non-negative sparse representation coefficients. This yields a model closer to how the active neurons of area V1 in the primate visual cortex process information. Although labels are not used during training, the non-negative sparse representation can still uncover latent discriminant information, and thus provides more informative coefficients and stronger discriminant ability for feature extraction. Moreover, NSPE is more efficient than the recently proposed sparse-representation-based SPP algorithm. Comprehensive comparisons and extensive experiments show that NSPE performs competitively against unsupervised learning algorithms such as classical PCA and the state-of-the-art techniques LPP, NPE and SPP.
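The pipeline described above can be sketched in two steps: first compute, for each sample, a non-negative sparse reconstruction from the remaining samples; then, as in NPE/SPP, solve a generalized eigenproblem so the learned linear projection preserves those reconstruction relations. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function names `nonneg_sparse_weights` and `nspe_projection` are invented here, and scikit-learn's `Lasso` with `positive=True` stands in for the paper's non-negative sparse coding step.

```python
# Minimal NSPE-style sketch (illustrative; not the authors' reference code).
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def nonneg_sparse_weights(X, alpha=0.05):
    """Reconstruct each sample from the remaining samples with
    non-negative, L1-sparse coefficients; row i of W holds the weights."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=0)           # dictionary of other samples
        coder = Lasso(alpha=alpha, positive=True, max_iter=10000)
        coder.fit(others.T, X[i])                  # atoms are the columns
        w = coder.coef_
        W[i, :i], W[i, i + 1:] = w[:i], w[i:]      # keep the diagonal at zero
    return W

def nspe_projection(X, W, d):
    """NPE/SPP-style step: keep the d generalized eigenvectors of
    X^T (I-W)^T (I-W) X a = lambda X^T X a with the smallest eigenvalues,
    so the subspace preserves the sparse reconstruction relations."""
    n, D = X.shape
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    A = X.T @ M @ X
    B = X.T @ X + 1e-6 * np.eye(D)                 # regularized for stability
    _, vecs = eigh(A, B)                           # ascending eigenvalues
    return vecs[:, :d]                             # (D, d) projection matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8))                   # toy data, 40 samples in 8-D
W = nonneg_sparse_weights(X)
P = nspe_projection(X, W, d=3)
Y = X @ P                                          # low-dimensional embedding
```

The non-negativity of the coefficients is what the `positive=True` flag enforces; the L1 penalty (`alpha`) controls how sparse each reconstruction is, which is the constraint the abstract highlights.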
► A novel unsupervised learning method called Non-negative Sparseness Preserving Embedding (NSPE) is proposed.
► NSPE outperforms other unsupervised learning algorithms for linear dimensionality reduction.
► NSPE preserves the non-negative sparse reconstruction relationships in low-dimensional subspace.
► The sparseness constraint of NSPE is directly added to control the non-negative sparse representation (NSR) coefficients.
► Although labels are not used in the training steps, the NSR can still discover the latent discriminant information.
Journal: Pattern Recognition - Volume 45, Issue 4, April 2012, Pages 1511–1523