Article ID: 392026
Journal: Information Sciences
Published Year: 2015
Pages: 12 Pages
File Type: PDF
Abstract

Due to the difficulty of collecting 3D samples, 3D face recognition technologies often have to work with smaller-than-desirable sample sizes. To enlarge the number of training samples per subject, we divide each training image into several patches. However, this immediately introduces two further problems for 3D models: high computational cost and dispersive features caused by the divided 3D image patches. We therefore first map the 3D face images to 2D depth images, which greatly reduces the dimensionality of the samples. Although the depth images retain most of the robust properties of 3D images, such as invariance to pose and illumination, they lose many discriminative features of the original 3D samples. In this study, we propose a Bayesian learning framework to extract discriminative features from the depth images. Specifically, we concentrate the features of the intra-class patches around a mean feature by maximizing a multivariate Gaussian likelihood function and, simultaneously, enlarge the distances between inter-class mean features by maximizing an exponential prior distribution over the mean features. For classification, we use a nearest-neighbor classifier with the Mahalanobis distance to measure the distance between the features of the test image and those of the training set. Experiments on two widely used 3D face databases demonstrate the efficiency and accuracy of the proposed method compared to relevant state-of-the-art methods.
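The classification step described above (nearest neighbor under the Mahalanobis distance) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the function name `mahalanobis_nn` and the assumption that a single shared covariance matrix is supplied are ours; the paper's learned Bayesian features would be the inputs.

```python
import numpy as np

def mahalanobis_nn(test_feat, train_feats, train_labels, cov):
    """Assign test_feat the label of its nearest training feature
    under the Mahalanobis distance induced by covariance matrix cov.

    test_feat:    (d,) feature vector of the test image
    train_feats:  (n, d) matrix of training feature vectors
    train_labels: (n,) class labels of the training features
    cov:          (d, d) covariance matrix (assumed invertible)
    """
    cov_inv = np.linalg.inv(cov)
    diffs = train_feats - test_feat  # (n, d) differences to each training sample
    # Squared Mahalanobis distance for every row: diff^T * cov_inv * diff
    d2 = np.einsum('nd,de,ne->n', diffs, cov_inv, diffs)
    return train_labels[int(np.argmin(d2))]

# Toy usage with two classes in 2D (identity covariance reduces to
# ordinary Euclidean nearest neighbor):
train_feats = np.array([[0.0, 0.0], [5.0, 5.0]])
train_labels = np.array([0, 1])
print(mahalanobis_nn(np.array([0.5, 0.5]), train_feats, train_labels, np.eye(2)))
```

With a non-identity covariance, directions of high variance are down-weighted, which is why the Mahalanobis metric is a natural fit when a Gaussian model of the intra-class features is already being estimated.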

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors