Article ID: 408310
Journal: Neurocomputing
Published Year: 2016
Pages: 9
File Type: PDF
Abstract

Multiple kernel learning has advantages over single kernel learning in both model interpretability and generalization performance. Existing multiple kernel learning methods usually solve the SVM in the dual, which is equivalent to the primal optimization. Research shows that solving in the primal achieves a faster convergence rate than solving in the dual. This paper presents a novel Lp-norm (p > 1) constrained non-sparse multiple kernel learning method that optimizes the objective function in the primal. A subgradient and quasi-Newton approach is used to solve the standard SVM; it possesses a superlinear convergence property and approximates the inverse Hessian without computing second derivatives, leading to a preferable convergence speed. An alternating optimization method is used to solve the SVM and to learn the base kernel weights. Experiments show that the proposed algorithm converges rapidly and that its efficiency compares favorably with other multiple kernel learning algorithms.
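The following is a minimal sketch, not the authors' implementation, of the alternating scheme the abstract describes: fix the kernel weights, solve an SVM in the primal with a quasi-Newton method, then update the weights under an Lp-norm constraint. The sketch assumes a squared hinge loss (so L-BFGS applies directly, whereas the paper combines a subgradient step with a quasi-Newton update for the standard hinge loss), a kernel-expansion parametrization f = K·alpha, and the common closed-form Lp-norm weight update beta_m proportional to ||w_m||^(2/(p+1)); all function names are illustrative.

```python
# Sketch of alternating optimization for Lp-norm MKL solved in the primal
# (illustrative only; not the paper's exact algorithm).
import numpy as np
from scipy.optimize import minimize

def combined_kernel(kernels, beta):
    """Weighted combination K = sum_m beta_m * K_m."""
    return sum(b * K for b, K in zip(beta, kernels))

def primal_objective(alpha, K, y, C):
    """Primal objective with f = K @ alpha and a squared hinge loss:
    0.5 * alpha^T K alpha + C * sum_i max(0, 1 - y_i f_i)^2."""
    f = K @ alpha
    margin = np.maximum(0.0, 1.0 - y * f)
    return 0.5 * alpha @ (K @ alpha) + C * np.sum(margin ** 2)

def primal_gradient(alpha, K, y, C):
    """Gradient of the objective above (K is symmetric)."""
    f = K @ alpha
    margin = np.maximum(0.0, 1.0 - y * f)
    return K @ alpha - 2.0 * C * (K @ (y * margin))

def lp_mkl_primal(kernels, y, C=1.0, p=2.0, n_outer=20):
    M, n = len(kernels), len(y)
    beta = np.full(M, M ** (-1.0 / p))   # uniform weights with ||beta||_p = 1
    alpha = np.zeros(n)
    for _ in range(n_outer):
        K = combined_kernel(kernels, beta)
        # Step 1: solve the primal SVM for fixed beta with a quasi-Newton method.
        res = minimize(primal_objective, alpha, args=(K, y, C),
                       jac=primal_gradient, method='L-BFGS-B')
        alpha = res.x
        # Step 2: closed-form Lp-norm weight update, beta_m ∝ ||w_m||^(2/(p+1)),
        # using ||w_m||^2 = beta_m^2 * alpha^T K_m alpha.
        norms_sq = np.array([beta[m] ** 2 * alpha @ (kernels[m] @ alpha)
                             for m in range(M)])
        norms_sq = np.maximum(norms_sq, 1e-12)
        beta = norms_sq ** (1.0 / (p + 1))
        beta /= np.linalg.norm(beta, ord=p)   # renormalize so ||beta||_p = 1
    return alpha, beta
```

In practice the fixed outer iteration count would be replaced by a stopping criterion such as a tolerance on the change in beta or in the objective value.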

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence