Article Code | Journal Code | Publication Year | Paper | Full Text |
---|---|---|---|---|
531788 | 869876 | 2016 | 17-page PDF (English) | Free download |
• We develop a novel feature transformation method for linear dimensionality reduction.
• Statistics are modeled in the transformed subspace to reduce the number of unknown parameters.
• The transformation matrix is obtained via joint optimization of MI and likelihood.
• Our method can maximize between-class separability as well as reduce estimation errors.
• Experimental results show our method performs better than other related methods.
In this paper, we develop a novel feature transformation method for supervised linear dimensionality reduction. Existing methods, e.g., Information Discriminant Analysis (IDA), estimate the first- and second-order statistics of the data in the original high-dimensional space and then design the transformation matrix based on information-theoretic criteria. Unfortunately, such transformation methods are sensitive to the accuracy of the statistics estimation. To overcome this disadvantage, our method describes the statistical structure of the transformed low-dimensional subspace via a linear statistical model, which reduces the number of unknown parameters, while simultaneously maximizing the mutual information (MI) between the transformed data and their class labels, which ensures between-class separability according to information theory. The key idea is to seek the optimal model parameters, including the transformation matrix, via joint optimization of the MI function and the log-likelihood function; the method can therefore not only reduce estimation errors but also maximize between-class separability. Experimental results on a synthetic dataset and benchmark datasets demonstrate that our method outperforms other related methods.
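To make the problem setting concrete, the sketch below implements classic Fisher LDA, a standard baseline for supervised linear dimensionality reduction of the kind the abstract discusses. This is only an illustrative example of learning a linear transformation matrix from labeled data; it is not the paper's joint MI/log-likelihood method, and all names (`lda_transform`, the toy data) are hypothetical.

```python
import numpy as np

def lda_transform(X, y, d):
    """Fisher LDA projection to d dimensions.

    A standard baseline for supervised linear dimensionality
    reduction; NOT the paper's joint MI/likelihood method.
    """
    n_features = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((n_features, n_features))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Top-d eigenvectors of Sw^{-1} Sb span the discriminant subspace;
    # a small ridge keeps Sw invertible.
    eigvals, eigvecs = np.linalg.eig(
        np.linalg.solve(Sw + 1e-8 * np.eye(n_features), Sb))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:d]].real  # transformation matrix W

# Toy usage: two Gaussian classes in 5-D, separated along the first axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
               rng.normal(0.0, 1.0, (50, 5)) + np.array([3, 0, 0, 0, 0])])
y = np.array([0] * 50 + [1] * 50)
W = lda_transform(X, y, 1)
Z = X @ W  # transformed 1-D features; class means are well separated
```

Note how this baseline estimates scatter matrices entirely in the original 5-D space, which is exactly the estimation-sensitivity the abstract's method is designed to avoid by modeling statistics in the low-dimensional subspace instead.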
Journal: Pattern Recognition - Volume 60, December 2016, Pages 554–570