Article ID: 409437
Journal: Neurocomputing
Published Year: 2006
Pages: 6
File Type: PDF
Abstract

The solution of multi-scale support vector regression (MS-SVR) with the quadratic loss function can be obtained by solving a time-consuming quadratic programming (QP) problem followed by a post-processing step. This paper adapts an expectation-maximization (EM) algorithm, based on two 2-level hierarchical-Bayes models that implement the l1-norm and the l0-norm regularization terms asymptotically, to train MS-SVR quickly. Experimental results show that the EM algorithm is faster than the QP algorithm on large data sets, that the l0-norm regularization term yields a far sparser solution than the l1-norm, and that the good performance of MS-SVR should be attributed to both the multi-scale kernels and the regularization terms.
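To make the idea concrete, below is a minimal sketch of the kind of EM-style iteratively reweighted update that implements the l1-norm and l0-norm regularization terms asymptotically for multi-scale kernel regression. It is not the authors' exact algorithm; the function and parameter names (rbf_kernel, ms_svr_em, scales, lam) are illustrative assumptions.

```python
# Sketch only: an EM-style iteratively reweighted update for sparse
# multi-scale kernel regression, not the paper's exact algorithm.
# E-step: fix per-coefficient weights from the current estimate.
# M-step: solve a weighted ridge problem in closed form.
import numpy as np

def rbf_kernel(X, Z, gamma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def ms_svr_em(X, y, scales=(0.1, 1.0, 10.0), lam=1e-2, norm="l1",
              n_iter=50, eps=1e-8):
    # Stack kernels at several scales side by side: one coefficient
    # block per scale, so sparsity selects both samples and scales.
    K = np.hstack([rbf_kernel(X, X, g) for g in scales])
    w = np.linalg.lstsq(K, y, rcond=None)[0]   # initial estimate
    for _ in range(n_iter):
        # Inverse prior variances under a hierarchical-Bayes model:
        # d_i = 1/|w_i| yields the l1-norm penalty in the limit;
        # d_i = 1/w_i^2 yields the l0-norm penalty in the limit.
        if norm == "l1":
            d = 1.0 / (np.abs(w) + eps)
        else:                                   # "l0"
            d = 1.0 / (w ** 2 + eps)
        # Closed-form weighted ridge solve.
        A = K.T @ K + lam * np.diag(d)
        w = np.linalg.solve(A, K.T @ y)
    w[np.abs(w) < 1e-6] = 0.0                   # prune negligible weights
    return w, K
```

Each iteration costs one linear solve rather than a full QP, which is consistent with the reported speed advantage on large data sets; with the "l0" weighting, small coefficients are driven toward zero more aggressively, producing the sparser solutions the abstract describes.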

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence