Article ID: 10326394
Journal: Neurocomputing
Published Year: 2016
Pages: 34
File Type: PDF
Abstract
Traditional non-negative matrix factorization (NMF) is an unsupervised method that represents non-negative data by a part-based dictionary and non-negative codes. Recently, unsupervised NMF has been extended to discriminative variants for classification problems. However, these discriminative methods may become inefficient when outliers are present in the data, e.g., mislabeled samples, because outliers usually deviate from the normal samples of a class and can perturb the discriminative dictionary. In this paper, we propose a novel method, called robust discriminative non-negative matrix factorization (RDNMF), to reduce the effect of outliers and improve the discriminative strength. RDNMF learns a non-negative dictionary for each class, and each dictionary contains two parts: a discriminative part and an outlier part. The discriminative parts are obtained by minimizing the cosine similarity between classes. The codes on the outlier part are required to be sparse, so that most outliers can be modeled by this part without strongly influencing the discriminative part. The final dictionary is obtained by concatenating the discriminative parts of all classes, and the non-negative codes for each training sample, as well as for each test sample, are obtained by coding with this dictionary. Experimental comparisons with existing dictionary learning methods on MNIST, PIE, Yale B and ORL demonstrate the effectiveness and robustness of our approach.
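The following is a minimal, illustrative sketch of the per-class factorization idea described in the abstract: the data of one class is factorized with a dictionary split into a discriminative part and an outlier part, and an L1 sparsity penalty is placed on the outlier-part codes. The split sizes k_disc/k_out, the penalty weight lam, and the use of standard multiplicative updates are assumptions for illustration; the paper's cross-class cosine-similarity term on the discriminative parts is omitted here.

```python
import numpy as np

def rdnmf_class_sketch(X, k_disc=20, k_out=5, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Sketch of a per-class factorization (not the authors' exact algorithm).

    Factorizes non-negative data X (features x samples) of one class as
    X ~= W H, with W = [W_d | W_o] (discriminative part | outlier part) and an
    L1 sparsity penalty (weight lam) on the outlier-part codes H_o, using
    standard multiplicative updates for sparse NMF.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    k = k_disc + k_out
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps

    # The L1 penalty is applied only to the rows of H belonging to the outlier part.
    penalty = np.zeros((k, 1))
    penalty[k_disc:] = lam

    for _ in range(n_iter):
        # Multiplicative update for the dictionary W (plain NMF step).
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Multiplicative update for the codes H; the penalty term in the
        # denominator shrinks the outlier-part codes toward sparsity.
        H *= (W.T @ X) / (W.T @ W @ H + penalty + eps)

    return W[:, :k_disc], W[:, k_disc:], H

# Example usage on random non-negative data (hypothetical sizes).
X = np.random.default_rng(1).random((64, 100))
W_d, W_o, H = rdnmf_class_sketch(X)
```

In this sketch, only W_d would be kept and concatenated across classes to form the final coding dictionary, while W_o absorbs samples that deviate from the class, mirroring the role of the outlier part described above.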
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence