Article ID: 534605
Journal: Pattern Recognition Letters
Published Year: 2013
Pages: 7
File Type: PDF
Abstract

• We present approaches for including tangent vector information in LDA and SRDA while retaining their computational cost.
• This provides a way of taking better advantage of the limited available data, possibly avoiding singularity problems.
• In the experimental results, recognition performance is either unaffected or improved.
• The methods become more robust to the known transformations used during learning.

In pattern recognition it is common for few training samples to be available relative to the dimensionality of the representation space, a situation known as the curse of dimensionality. This problem can be alleviated by dimensionality reduction; in particular, supervised dimensionality reduction techniques generally provide better recognition performance. However, several of these techniques themselves suffer from the curse when applied directly to high-dimensional spaces. We propose to overcome this problem by incorporating additional information into supervised subspace learning techniques using what are known as tangent vectors. This additional information accounts for transformations that the sample data may undergo; in effect, it models unseen data and makes better use of the scarce training samples. In this paper, methods for incorporating tangent vector information are described for one classical technique (LDA) and one state-of-the-art technique (SRDA). Experimental results confirm that this additional information improves performance and robustness to the transformations used during learning.
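The abstract gives no implementation details, so the following is only a minimal sketch of the general idea in Python/NumPy, under two stated assumptions: tangent vectors are approximated by finite differences of a known transformation (here, a one-pixel horizontal translation), and they enter LDA by augmenting the within-class scatter with tangent outer products, Sw' = Sw + gamma * TᵀT. The helper names `translation_tangents` and `tangent_lda` and the weight `gamma` are hypothetical illustrations, not the paper's formulation.

```python
import numpy as np

# Hypothetical helper: finite-difference tangent vectors for horizontal
# translation, t_i ~ (shift(x_i) - x_i) / eps, on flattened images.
def translation_tangents(X, shape, eps=1.0):
    T = np.empty_like(X)
    for i, x in enumerate(X):
        img = x.reshape(shape)
        shifted = np.roll(img, 1, axis=1)   # one-pixel shift to the right
        T[i] = (shifted - img).ravel() / eps
    return T

# Sketch of tangent-augmented LDA: the within-class scatter is extended
# with tangent outer products, Sw' = Sw + gamma * T^T T (an assumed
# formulation for illustration, not necessarily the paper's exact one).
def tangent_lda(X, y, T, gamma=0.1, n_components=None):
    classes = np.unique(y)
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)       # within-class scatter
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)     # between-class scatter
    Sw += gamma * (T.T @ T)                 # tangent vector information
    Sw += 1e-6 * np.eye(d)                  # small ridge against singularity
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    k = n_components if n_components is not None else len(classes) - 1
    return evecs[:, order[:k]].real

# Toy usage: 40 random 8x8 "images", two classes.
rng = np.random.default_rng(0)
X = rng.random((40, 64))
y = rng.integers(0, 2, size=40)
T = translation_tangents(X, (8, 8))
W = tangent_lda(X, y, T)                    # projection matrix, 64 x 1
Z = X @ W                                   # reduced representation
```

Because the tangent term only adds a d-by-d matrix product to the scatter computation, the asymptotic cost of LDA is unchanged, consistent with the highlight above; for SRDA, which solves regularized least-squares problems, an analogous tangent term could plausibly enter through the regularizer, though the paper's exact construction may differ.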

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition