Article ID: 536211
Journal: Pattern Recognition Letters
Published Year: 2015
Pages: 7 Pages
File Type: PDF
Abstract

• We use kernel sparse coding in the context of transfer learning.
• We maximize the correlation between source and target sparse codes of the same class.
• A unified framework is presented to learn the dictionary and the transfer sparse codes.

When only a few labeled images are available, the trained classifier performs poorly even if sparse coding is used to process the image features. We therefore utilize data from related domains as source data to help the classification task. In this paper, we propose a Supervised Transfer Kernel Sparse Coding (STKSC) algorithm to construct discriminative sparse representations for cross-domain image classification tasks. Specifically, we map the source and target data into a high-dimensional feature space via the kernel trick, thereby capturing nonlinear image features. To make the sparse representations robust to the domain mismatch, we incorporate the Maximum Mean Discrepancy (MMD) criterion into the objective function of kernel sparse coding. We also use label information to learn more discriminative sparse representations. Furthermore, we provide a unified framework to learn the dictionary and the discriminative sparse representations, which can be further used for classification. The experimental results validate that our method outperforms many state-of-the-art methods.
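
To make the role of the MMD term concrete, the following is a minimal illustrative sketch, not the authors' exact STKSC formulation: one plausible way an empirical MMD regularizer can be added to a kernel sparse coding objective with a shared dictionary. The trade-off weights λ and β, and the use of a linear-kernel MMD over the codes, are assumptions for illustration:

    min_{D, A_s, A_t}  ||φ(X_s) − D A_s||_F² + ||φ(X_t) − D A_t||_F²
                       + λ ( ||A_s||_1 + ||A_t||_1 )
                       + β || (1/n_s) Σ_i a_i^s − (1/n_t) Σ_j a_j^t ||²

Here φ(·) is the kernel-induced feature map, D is a dictionary shared by both domains in the feature space, A_s and A_t collect the sparse codes a_i^s and a_j^t of the n_s source and n_t target samples, and the last term penalizes the distance between the mean source code and the mean target code, which is the empirical MMD computed on the sparse representations. Because φ only enters through inner products, the reconstruction terms can be evaluated with the kernel matrix alone.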

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors