| Article code | Journal code | Publication year | English article | Full-text version |
|---|---|---|---|---|
| 535101 | 870320 | 2016 | 7-page PDF | Free download |
• We present a new approach for fully unsupervised domain adaptation.
• We prove consistency theorems for the proposed approach.
• We propose a new learning bound for domain adaptation.
• We show that our approach outperforms several recent domain adaptation methods.
• Our approach minimizes the distance between the source and target task distributions (see the sketch after this list).
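To make the last point concrete, the snippet below estimates the maximum mean discrepancy (MMD), a standard kernel-embedding distance between two sample distributions of the kind referred to in the highlights. It is a minimal illustrative sketch: the RBF kernel, the biased estimator, and the synthetic samples are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a kernel-embedding distance (squared MMD) between
# source and target samples. Illustrative only; the paper's kernel and
# estimator may differ.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix k(a, b) = exp(-gamma * ||a - b||^2).
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        + np.sum(B ** 2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(Xs, Xt, gamma=1.0):
    # Biased estimate of the squared MMD between source Xs and target Xt.
    k_ss = rbf_kernel(Xs, Xs, gamma).mean()
    k_tt = rbf_kernel(Xt, Xt, gamma).mean()
    k_st = rbf_kernel(Xs, Xt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Usage: smaller values indicate better-aligned source/target distributions.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # synthetic source sample
Xt = rng.normal(0.5, 1.0, size=(100, 5))   # synthetic, shifted target sample
print(mmd2(Xs, Xt, gamma=0.5))
```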
Domain adaptation is a field of machine learning that addresses the problem that arises when a classifier is trained and tested on domains drawn from different distributions. This paradigm is of vital importance because it allows a learner to generalize knowledge across different tasks. In this paper, we present a new method for fully unsupervised domain adaptation that seeks to align two domains using a shared set of basis vectors derived from the eigenvectors of each domain. We use non-negative matrix factorization (NMF) techniques to generate a non-negative embedding that minimizes the distance between the projections of the source and target data. We present a theoretical justification for our approach by showing the consistency of the similarity function defined using the obtained embedding. We also prove a theorem that relates the source and target domain errors using kernel embeddings of distribution functions. We validate our approach on benchmark data sets and show that it outperforms several state-of-the-art domain adaptation methods.
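The sketch below illustrates the general recipe the abstract describes, not the paper's actual algorithm: a single non-negative basis is fit on the stacked source and target samples with NMF, both domains are projected onto it, and a classifier trained on the projected source data is applied to the projected (unlabeled) target data. The synthetic data, the choice of scikit-learn's NMF, and all parameter values are assumptions made for illustration.

```python
# A minimal sketch of a shared non-negative embedding for domain adaptation.
# NOT the paper's method; data and parameters are illustrative assumptions.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.random((200, 30))          # non-negative source features (NMF requires >= 0)
ys = (Xs[:, 0] > 0.5).astype(int)   # source labels (the target is unlabeled)
Xt = rng.random((150, 30)) * 0.9    # non-negative, slightly shifted target features

# Learn one non-negative basis shared by both domains by stacking their samples.
nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
nmf.fit(np.vstack([Xs, Xt]))

# Project each domain onto the shared basis; both embeddings live in the same space.
Zs = nmf.transform(Xs)
Zt = nmf.transform(Xt)

# Train on the projected source data, then predict on the projected target data.
clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
target_predictions = clf.predict(Zt)
print(target_predictions[:10])
```

In this toy setup the shared basis is what ties the two domains together: because both projections are expressed in the same coordinates, a classifier fit on the source embedding can be applied directly to the target embedding.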
Journal: Pattern Recognition Letters - Volume 77, 1 July 2016, Pages 35–41
