Article ID: 9952072
Journal: Engineering Applications of Artificial Intelligence
Published Year: 2018
Pages: 8
File Type: PDF
Abstract
This paper presents a global manifold margin learning approach for feature extraction and dimensionality reduction, named locally linear representation manifold margin (LLRMM). Under the assumption that points lying on the same manifold share one class label while points on different manifolds carry different labels, LLRMM aims to discriminate among the manifolds. The proposed LLRMM first constructs a between-manifold graph and a within-manifold graph: in the between-manifold graph, each point and its k nearest neighbors must belong to different manifolds, whereas in the within-manifold graph a node and its neighbors lie on the same manifold. The minimum locally linear representation trick is then used to reconstruct every node from its k nearest neighbors in each graph, from which a between-manifold graph scatter and a within-manifold graph scatter are derived, leading to a novel global model of the manifold margin. Finally, a projection is learned that maps the original data into a low-dimensional subspace with the maximum manifold margin. Experiments on several widely used face data sets, including AR, CMU PIE, Yale, YaleB and LFW, show that the proposed LLRMM outperforms kernel principal component analysis (KPCA), non-parametric discriminant analysis (NDA), reconstructive discriminant analysis (RDA), discriminant multiple manifold learning (DMML) and large margin nearest neighbor (LMNN).
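To make the described pipeline concrete, the Python/NumPy sketch below (not from the paper) illustrates the kind of procedure the abstract outlines: LLE-style reconstruction weights over within-class and between-class k-nearest-neighbor sets, two residual scatter matrices, and a projection taken from the eigenvectors of their difference. The difference-of-scatters margin objective, the function names (llr_weights, llrmm_projection) and all parameter choices are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def llr_weights(x, neighbors, reg=1e-3):
    """Locally linear representation weights: reconstruct x from the
    columns of `neighbors` with weights that sum to one (LLE-style
    constrained least squares)."""
    k = neighbors.shape[1]
    diff = neighbors - x[:, None]            # d x k differences
    C = diff.T @ diff                        # local Gram matrix, k x k
    C += reg * np.trace(C) * np.eye(k) / k   # regularize for stability
    w = np.linalg.solve(C, np.ones(k))
    return w / w.sum()

def llrmm_projection(X, y, k=5, dim=2):
    """Sketch of a manifold-margin projection (assumed objective):
    accumulate within-class and between-class reconstruction-residual
    scatters, then keep the top eigenvectors of their difference.
    Assumes every class has more than k samples."""
    n, d = X.shape
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for i in range(n):
        xi = X[i]
        dist = np.linalg.norm(X - xi, axis=1)
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        other = np.where(y != y[i])[0]
        for idx, S in ((same, Sw), (other, Sb)):
            nb = idx[np.argsort(dist[idx])[:k]]   # k nearest in this graph
            w = llr_weights(xi, X[nb].T)
            r = xi - X[nb].T @ w                  # reconstruction residual
            S += np.outer(r, r)
    # maximize the assumed margin trace(P^T (Sb - Sw) P)
    evals, evecs = np.linalg.eigh(Sb - Sw)
    P = evecs[:, np.argsort(evals)[::-1][:dim]]
    return P                                      # d x dim projection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))
    y = np.repeat(np.arange(3), 20)
    P = llrmm_projection(X, y, k=5, dim=2)
    Z = X @ P   # low-dimensional embedding of the data
    print(Z.shape)
```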
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors
, , ,