Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
4944218 | Information Sciences | 2017 | 16 Pages | |
Abstract
Distribution mismatch between the modeling data and the query data is a known domain adaptation issue in machine learning. To this end, in this paper, we propose an l2,1-norm based discriminative robust kernel transfer learning (DKTL) method for high-level recognition tasks. The key idea is to realize robust domain transfer by simultaneously integrating domain-class-consistency (DCC) metric based discriminative subspace learning, kernel learning in a reproducing kernel Hilbert space, and representation learning between the source and target domains. The DCC metric comprises two properties: domain-consistency, which measures the between-domain distribution discrepancy, and class-consistency, which measures the within-domain class separability. The essential objective of the proposed transfer learning method is to maximize the DCC metric, which is equivalent to minimizing the domain-class-inconsistency (DCIC), such that domain distribution mismatch and class inseparability are formulated and addressed simultaneously in a unified framework. The merits of the proposed method include (1) the robust sparse coding selects a few valuable source data points while removing noise (outliers) during knowledge transfer, and (2) the proposed DCC metric can pursue more discriminative subspaces for the different domains. As a result, maximum class separability is also guaranteed. Extensive experiments on a number of visual datasets demonstrate the superiority of the proposed method over other state-of-the-art domain adaptation and transfer learning methods.
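The abstract does not give the paper's equations, so the following is only a minimal illustrative sketch of the kind of objective it describes: an l2,1-norm (row-sparse) penalty that selects a few source samples, a mean-difference surrogate for the between-domain discrepancy, and a within-class scatter term for class inseparability. All function names, the projection matrices Ps/Pt, the coding matrix Z, and the specific terms are assumptions for illustration, not the authors' actual DKTL/DCIC formulation.

```python
import numpy as np

def l21_norm(Z):
    # l2,1-norm: sum of the l2-norms of the rows of Z.
    # Row sparsity means only a few source samples contribute
    # to reconstructing the target data (outlier rows shrink to zero).
    return np.sum(np.linalg.norm(Z, axis=1))

def domain_inconsistency(Ps, Pt, Xs, Xt):
    # Between-domain discrepancy: distance between the means of the
    # projected source and target data (an MMD-style stand-in; the
    # paper's DCC/DCIC terms may differ).
    mu_s = (Ps.T @ Xs).mean(axis=1)
    mu_t = (Pt.T @ Xt).mean(axis=1)
    return np.linalg.norm(mu_s - mu_t) ** 2

def class_inconsistency(P, X, y):
    # Within-domain class inseparability: within-class scatter of the
    # projected data (smaller value = better class separability).
    Z = P.T @ X
    total = 0.0
    for c in np.unique(y):
        Zc = Z[:, y == c]
        total += np.sum((Zc - Zc.mean(axis=1, keepdims=True)) ** 2)
    return total

def dcic_objective(Ps, Pt, Z, Xs, Xt, ys, lam=1.0, beta=1.0):
    # Toy DCIC-style objective: reconstruct projected target data from
    # projected source data, with an l2,1 row-sparsity penalty plus the
    # domain- and class-inconsistency terms above.
    recon = np.linalg.norm(Pt.T @ Xt - (Ps.T @ Xs) @ Z) ** 2
    return (recon
            + lam * l21_norm(Z)
            + beta * (domain_inconsistency(Ps, Pt, Xs, Xt)
                      + class_inconsistency(Ps, Xs, ys)))

# Tiny random example: 20-dim features, 30 source / 25 target samples,
# projected to a 5-dim shared subspace.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(20, 30)), rng.normal(size=(20, 25))
ys = rng.integers(0, 3, size=30)
Ps, Pt = rng.normal(size=(20, 5)), rng.normal(size=(20, 5))
Z = rng.normal(size=(30, 25))
print(dcic_objective(Ps, Pt, Z, Xs, Xt, ys))
```

In practice such an objective would be minimized by alternating updates of the projections and the coding matrix Z; the sketch above only evaluates the objective to make the roles of the three terms concrete.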
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Lei Zhang, Jian Yang, David Zhang