Article ID: 529807
Journal: Journal of Visual Communication and Image Representation
Published Year: 2014
Pages: 10
File Type: PDF
Abstract

• Linear Regression Classification (LRC) is extended to the undersampled situation.
• An intraclass variant dictionary is adopted to represent training/testing variation.
• Three solvers, quasi-inverse, ridge regularization, and SVD, are designed for the low-rank problem.
• Experiments show that ELRC has better generalization ability and is more robust.

Linear Regression Classification (LRC) is a recently proposed pattern recognition method that formulates the recognition problem as class-specific linear regression, assuming sufficient training samples per class. In this paper, we extend LRC via an intraclass variant dictionary and SVD to undersampled face recognition, where each class has very few training samples, or even only one. The intraclass variant dictionary is adopted in the undersampled situation to represent the possible variation between the training and testing samples. Three methods, quasi-inverse, ridge regularization, and Singular Value Decomposition (SVD), are designed to solve the low-rank problem of the data matrix. The whole algorithm, named Extended LRC (ELRC), is then presented for face recognition via the intraclass variant dictionary and SVD. Experimental results on three well-known face databases show that the proposed ELRC has better generalization ability and is more robust in classification than many state-of-the-art methods in the undersampled situation.
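To make the idea concrete, the sketch below illustrates class-specific linear regression classification in the spirit of LRC: a test vector is regressed onto each class's training matrix (optionally augmented with a shared intraclass variant dictionary), and the class with the smallest reconstruction residual is chosen. The ridge regularization and SVD-based pseudo-inverse are the two solvers named in the abstract; the function name, the variable `variant_dict`, and the parameter `lam` are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def lrc_predict(y, class_matrices, variant_dict=None, lam=1e-3, use_svd=False):
    """Classify test vector y by minimum class-specific reconstruction residual.

    class_matrices: dict {class_label: X_i} with training samples as columns.
    variant_dict:   optional intraclass variant dictionary D (columns are
                    variation atoms shared across classes) -- an illustrative
                    stand-in for the paper's dictionary, not its exact design.
    lam:            ridge parameter for the regularized normal equations.
    use_svd:        if True, solve with an SVD-based pseudo-inverse instead.
    """
    residuals = {}
    for label, X in class_matrices.items():
        # Augment the class-specific basis with the shared variation atoms.
        A = X if variant_dict is None else np.hstack([X, variant_dict])
        if use_svd:
            # SVD pseudo-inverse handles the rank-deficient (undersampled) case.
            beta = np.linalg.pinv(A) @ y
        else:
            # Ridge-regularized normal equations: (A^T A + lam*I) beta = A^T y.
            beta = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        residuals[label] = np.linalg.norm(y - A @ beta)
    # Predict the class whose subspace reconstructs y with the smallest error.
    return min(residuals, key=residuals.get)
```

In the undersampled setting each X_i has too few columns to be well-conditioned on its own, which is why the sketch falls back on regularization or a pseudo-inverse and why the shared variation dictionary is appended to every class matrix.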

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors
, , ,