Article ID: 407841
Journal: Neurocomputing
Published Year: 2014
Pages: 9
File Type: PDF
Abstract

Learning a classifier when only the features and the marginal distribution of class labels within each data group are known is both theoretically interesting and practically useful. Specifically, we consider the case in which the ratio of the number of data instances to the number of classes is large. We prove a sample complexity upper bound in this setting, inspired by an analysis of existing algorithms. We further formulate the problem in a density estimation framework to learn a generative classifier. We also develop a practical RBM-based algorithm, which shows promising performance on benchmark datasets.
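As a rough illustration of the setting only (not the paper's RBM-based method), the minimal Python sketch below fits a logistic model to groups of instances for which only the class-label proportions are known, by matching each group's predicted positive-class proportion to its given proportion. The synthetic data, the squared proportion-matching loss, and all parameter values are hypothetical choices for illustration.

# Minimal sketch of learning from group-level label proportions:
# each group (bag) of instances comes with only its marginal class
# distribution, never per-instance labels. Hypothetical illustration,
# not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, p_pos):
    # Draw n points from two Gaussian classes; keep only the group's proportion.
    y = rng.random(n) < p_pos
    x = rng.normal(loc=np.where(y, 1.5, -1.5)[:, None], scale=1.0, size=(n, 2))
    return x, p_pos

groups = [make_group(200, p) for p in (0.2, 0.5, 0.8)]
w = np.zeros(3)  # two weights plus a bias for a logistic model

def predict(x, w):
    return 1.0 / (1.0 + np.exp(-(x @ w[:2] + w[2])))

# Gradient descent on: sum over groups of (mean_i p(y=1|x_i) - given proportion)^2.
for _ in range(500):
    grad = np.zeros_like(w)
    for x, p_pos in groups:
        p = predict(x, w)
        diff = p.mean() - p_pos
        dmean = p * (1 - p) / len(x)      # derivative of mean(p) w.r.t. each logit
        grad[:2] += 2 * diff * (dmean @ x)
        grad[2] += 2 * diff * dmean.sum()
    w -= 0.5 * grad

for x, p_pos in groups:
    print(f"given proportion {p_pos:.2f}, recovered {predict(x, w).mean():.2f}")

Running the sketch prints, for each group, the given label proportion alongside the proportion recovered by the fitted model.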

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence