Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6866517 | Neurocomputing | 2014 | 10 Pages |
Abstract
Generative score spaces have recently received increasing attention due to their state-of-the-art performance in a wide range of recognition tasks. These methods model the distribution of the training data with probabilistic generative models and derive a feature for each sample from those models. The derived feature encodes information about the sample, the hidden variables, and the model parameters for classification, providing a staged way to combine the strength of generative models at inferring hidden information with that of discriminative models at classification. The underlying point is that the hidden information carried by the hidden variables of generative models is informative and useful for classification. In this paper, we propose a general extension of existing score space methods that exploits the class label, which encodes rich discriminative information, when deriving feature mappings. This is achieved by extending the regular generative models to class-conditional models over both the observed variable and the class label, and deriving the feature mapping over these extended models. The resulting methods take simple and intuitive forms: they are weighted versions of existing methods that benefit from Bayesian inference of the class label. Empirical evaluation over two typical generative models and six datasets shows significant improvement over existing methods.
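The abstract's key idea, per-class score features weighted by the Bayesian posterior over the class label, can be illustrated with a minimal sketch. The toy model below (class-conditional Gaussians with identity covariance, hypothetical means and priors) is an assumption for illustration only; the paper's derivation covers general generative models with hidden variables.

```python
import numpy as np

# Toy class-conditional model: p(x | c) = N(x; mu_c, I).
# mus and priors are hypothetical values, not from the paper.
mus = np.array([[0.0, 0.0], [3.0, 3.0]])   # per-class means
priors = np.array([0.5, 0.5])              # uniform class priors

def log_likelihoods(x):
    # log N(x; mu_c, I) for each class c, up to a shared constant
    return -0.5 * np.sum((x - mus) ** 2, axis=1)

def score_feature(x):
    """Posterior-weighted, Fisher-score-style feature.

    The per-class score w.r.t. mu_c is (x - mu_c); weighting each
    block by p(c | x) mirrors the "weighted versions of existing
    methods" described in the abstract.
    """
    ll = log_likelihoods(x)
    post = np.exp(ll - ll.max()) * priors
    post /= post.sum()                     # p(c | x) via Bayes' rule
    blocks = [post[c] * (x - mus[c]) for c in range(len(mus))]
    return np.concatenate(blocks)

phi = score_feature(np.array([2.5, 2.8]))
print(phi.shape)  # one block per class: 2 classes x 2 dims -> (4,)
```

Because the sample lies near the second class mean, the posterior concentrates on that class and its score block dominates the feature, which is how the class label's discriminative information enters the mapping.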
Authors
Bin Wang, Cungang Wang, Yuncai Liu