Article ID: 407989
Journal: Neurocomputing
Published Year: 2011
Pages: 9
File Type: PDF
Abstract

We introduce a kernel learning algorithm, called kernel propagation (KP), that learns a nonparametric kernel from a mixture of a few pairwise constraints and plentiful unlabeled samples. Specifically, KP consists of two stages: the first learns a small sub-kernel matrix restricted to the samples involved in constraints, and the second propagates this learned sub-kernel matrix to a full kernel matrix over all samples. Interestingly, our approach exposes a natural connection between KP and label propagation (LP): each LP naturally induces a KP counterpart. Accordingly, we develop three KPs from three typical LPs. Following the same idea, we also develop an out-of-sample extension that directly obtains a kernel matrix for samples outside the training set without relearning. Experiments verify that our developments are more efficient, more error-tolerant, and comparably effective in comparison with a state-of-the-art algorithm.
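To make the two-stage idea concrete, here is a minimal sketch in Python of the propagation stage, assuming the label propagation of Zhou et al. with the closed form F = (I - alpha*S)^{-1} Y; the RBF graph construction, the parameter choices, and the function names are illustrative assumptions, not the paper's exact recipe. Restricting the LP inverse to the columns of the constrained samples yields a propagation matrix T (n x l), and the propagated kernel T K_sub T^T is positive semidefinite whenever the learned sub-kernel K_sub is.

import numpy as np

def rbf_affinity(X, sigma=1.0):
    """Dense RBF affinity matrix W over the rows of X, diagonal zeroed."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    return W

def kernel_propagation(X, K_sub, constrained_idx, alpha=0.9, sigma=1.0):
    """Propagate a learned sub-kernel K_sub (l x l), defined on the
    constrained samples, to a full n x n kernel over all samples.

    The LP closed form F = (I - alpha*S)^{-1} Y gives a propagation
    matrix; keeping only the columns of the constrained samples yields
    T (n x l), and the full kernel is T @ K_sub @ T.T."""
    n = X.shape[0]
    W = rbf_affinity(X, sigma)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))  # symmetric normalization D^{-1/2} W D^{-1/2}
    T = np.linalg.solve(np.eye(n) - alpha * S, np.eye(n)[:, constrained_idx])
    return T @ K_sub @ T.T           # full kernel over all n samples

In this reading, an out-of-sample extension would correspond to forming the rows of T for unseen points rather than relearning K_sub, which is consistent with the relearning-free extension the abstract describes.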

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence