| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 6854886 | Expert Systems with Applications | 2018 | 13 | |
Abstract
The paper addresses the unconstrained face recognition task under the small-sample-size problem, based on computing distances between high-dimensional off-the-shelf features extracted by a deep convolutional neural network. We present a novel statistical recognition method that maximizes the likelihood (joint probability density) of the distances to all reference images in the gallery set. This likelihood is estimated using the known asymptotically normal distribution of the Kullback-Leibler discrimination between nonnegative features. Our approach penalizes an individual if their feature vector does not behave like the features of the observed image in the space of dissimilarities to the gallery images. We provide an experimental study with the LFW (Labeled Faces in the Wild), YTF (YouTube Faces) and IJB-A (IARPA Janus Benchmark A) datasets and state-of-the-art deep learning-based feature extractors (VGG-Face, VGGFace2, ResFace-101, CenterFace and Light CNN). It is demonstrated that the proposed approach can be applied with traditional distances to increase accuracy by 0.3-5.5% compared to known methods, especially when the training and testing images differ significantly.
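As a rough illustration of the distance computation the abstract builds on, the sketch below computes the symmetric Kullback-Leibler discrimination between nonnegative feature vectors (treated as discrete distributions after L1 normalization) and uses it for a baseline nearest-neighbor match against a gallery. This is only a minimal sketch of the underlying distance, not the authors' full maximum-likelihood method; all function names and the epsilon smoothing are illustrative assumptions.

```python
import numpy as np

def kl_discrimination(x, y, eps=1e-12):
    """Symmetric Kullback-Leibler discrimination between two nonnegative
    feature vectors, each L1-normalized into a discrete distribution.
    The eps smoothing (an assumption here) avoids log(0)."""
    p = np.clip(x / (x.sum() + eps), eps, None)
    q = np.clip(y / (y.sum() + eps), eps, None)
    return float(np.sum((p - q) * np.log(p / q)))

def nearest_reference(query, gallery):
    """Baseline nearest-neighbor rule: return the index of the gallery
    feature vector closest to the query under the KL discrimination.
    (The paper's method instead scores the whole vector of distances.)"""
    dists = [kl_discrimination(query, g) for g in gallery]
    return int(np.argmin(dists))
```

The paper's contribution replaces this simple argmin with a likelihood over the joint distribution of all gallery distances, exploiting their asymptotic normality; the pairwise discrimination above is only the building block.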
Keywords
LFW, MLP, LBP, AUC, PCA, Kullback–Leibler divergence, Local binary patterns, Maximum likelihood estimation, Linear discriminant analysis, Principal component analysis, LDA, Statistical pattern recognition, HOG, Maximum a posteriori, CNN, Convolutional neural network, SVM, Support vector machine, Area under curve, Nearest neighbor, MAP, Histogram of oriented gradients, Multi-layer perceptron
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Andrey V. Savchenko, Natalya S. Belova