Article ID: 380231
Journal: Engineering Applications of Artificial Intelligence
Published Year: 2016
Pages: 8
File Type: PDF
Abstract

•We propose a kernel choice method for domain adaptation.
•We reduce the distribution mismatch based on the Maximum Mean Discrepancy.
•Given an upper bound on the Type I error, our method minimizes the Type II error.
•We apply our method to classification and evaluate it on two datasets.

In this paper, a kernel choice method is proposed for domain adaptation, referred to as Optimal Kernel Choice Domain Adaption (OKCDA). It jointly learns a robust classifier and the kernel combination parameters of Multiple Kernel Learning. Kernel-based learning strategies for domain adaptation have shown outstanding performance. They embed two domains with different distributions, namely the auxiliary (source) domain and the target domain, into a reproducing kernel Hilbert space, and exploit the labeled data from the source domain to train a robust kernel-based SVM classifier for the target domain. We reduce the distribution mismatch by constructing a test statistic between the two domains based on the Maximum Mean Discrepancy (MMD), and minimize the Type II error given an upper bound on the Type I error. Simultaneously, we minimize the structural risk functional. To highlight the advantages of the proposed method, we tackle text classification problems on the 20 Newsgroups and Email Spam datasets. The results demonstrate that our method exhibits outstanding performance.
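To make the MMD-based mismatch measure mentioned above concrete, the following minimal Python sketch computes a biased empirical estimate of the squared MMD between a source and a target sample. It is illustrative only and not the authors' OKCDA implementation; the Gaussian kernel, the bandwidth sigma, and the toy shifted-Gaussian data are assumptions for the example.

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_biased(X_src, X_tgt, sigma=1.0):
    # Biased empirical estimate of the squared MMD:
    # mean k(src, src) + mean k(tgt, tgt) - 2 * mean k(src, tgt).
    k_ss = gaussian_kernel(X_src, X_src, sigma).mean()
    k_tt = gaussian_kernel(X_tgt, X_tgt, sigma).mean()
    k_st = gaussian_kernel(X_src, X_tgt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy example: two "domains" drawn from slightly shifted Gaussians.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 5))
target = rng.normal(0.5, 1.0, size=(200, 5))
print("MMD^2 estimate:", mmd2_biased(source, target, sigma=1.0))

A larger estimate indicates a greater mismatch between the two distributions; in a multiple kernel setting, the kernel (or kernel combination) would be chosen so that this statistic yields a powerful two-sample test while the classifier's structural risk stays low.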

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors