Article ID: 410385 · Journal: Neurocomputing · Published: 2010 · 9 Pages · File Type: PDF
Abstract

Linear discriminant analysis (LDA) is a well-known scheme for supervised subspace learning and has been widely used in computer vision and pattern recognition applications. However, an intrinsic limitation of LDA is its sensitivity to outliers, because it uses the Frobenius norm to measure the inter-class and intra-class distances. In this paper, we propose a novel rotational invariant L1 norm (i.e., R1 norm) based discriminant criterion (referred to as DCL1), which better characterizes intra-class compactness and inter-class separability by using the rotational invariant L1 norm instead of the Frobenius norm. Based on the DCL1, three subspace learning algorithms (i.e., 1DL1, 2DL1, and TDL1) are developed for vector-based, matrix-based, and tensor-based representations of data, respectively. They substantially reduce the influence of outliers, resulting in robust classification. Theoretical analysis and experimental evaluations demonstrate the promise and effectiveness of the proposed DCL1 and its algorithms.
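To illustrate the distinction the abstract draws, the sketch below contrasts the standard Frobenius-norm intra-class scatter (a sum of squared distances to class means) with an R1-norm version (a sum of unsquared Euclidean distances). The function names and the toy data are illustrative assumptions, not the paper's implementation; the point is only that squaring amplifies the contribution of distant outliers, which is the sensitivity DCL1 is designed to reduce.

```python
import numpy as np

def frobenius_scatter(X, labels):
    """Intra-class scatter under the (squared) Frobenius norm:
    sum over classes of squared Euclidean distances to the class mean.
    Outliers contribute quadratically, dominating the criterion."""
    total = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        total += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return total

def r1_scatter(X, labels):
    """Intra-class scatter under the rotational invariant L1 (R1) norm:
    sum over classes of unsquared Euclidean distances to the class mean.
    Rotation-invariant (unlike the entrywise L1 norm), but outliers
    contribute only linearly, so they influence the criterion less."""
    total = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        total += np.sum(np.linalg.norm(Xc - Xc.mean(axis=0), axis=1))
    return total

# Two points in one class, each at distance 2 from the class mean:
X = np.array([[0.0, 0.0], [4.0, 0.0]])
y = np.array([0, 0])
print(frobenius_scatter(X, y))  # 8.0  (2^2 + 2^2)
print(r1_scatter(X, y))         # 4.0  (2 + 2)
```

Because the R1 distance grows linearly rather than quadratically with a point's displacement, a single far-away outlier cannot dominate the intra-class term, which is what makes the resulting subspace estimates more robust.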
