Article ID | Journal ID | Year | English Paper | Full Text |
---|---|---|---|---|
409437 | 679072 | 2006 | 6-page PDF | Free Download |
English Title of the ISI Paper
Training sparse MS-SVR with an expectation-maximization algorithm
Related Subjects
Engineering and Basic Sciences
Computer Engineering
Artificial Intelligence

English Abstract
The solution of multi-scale support vector regression (MS-SVR) with the quadratic loss function can be obtained by solving a time-consuming quadratic programming (QP) problem followed by a post-processing step. This paper adapts an expectation-maximization (EM) algorithm based on two 2-level hierarchical-Bayes models, which implement the l1-norm and the l0-norm regularization terms asymptotically, to train MS-SVR quickly. Experimental results show that the EM algorithm is faster than the QP algorithm for large data sets, that the l0-norm regularization term promotes a far sparser solution than the l1-norm, and that the good performance of MS-SVR should be attributed to the multi-scale kernels and the regularization terms.
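To illustrate the general idea behind the abstract, the following is a minimal sketch, not the authors' actual MS-SVR algorithm: the l1-norm prior on kernel weights can be viewed as a Gaussian scale mixture, and an EM iteration then reduces to repeatedly solving a reweighted ridge problem, which drives many weights toward zero. All names, the kernel width, and the thresholding step here are illustrative assumptions, and a single-scale RBF kernel is used rather than the paper's multi-scale kernels.

```python
# Hedged sketch (not the paper's method): EM-style iteratively reweighted
# ridge regression approximating l1-regularized sparse kernel regression.
import numpy as np

def em_sparse_kernel_regression(K, y, lam=1.0, n_iter=50, eps=1e-8):
    """Treat the l1 prior on weights w as a Gaussian scale mixture;
    each M-step is then a weighted ridge solve (hypothetical helper)."""
    n = K.shape[0]
    w = np.linalg.solve(K + lam * np.eye(n), y)  # plain ridge initialization
    for _ in range(n_iter):
        # E-step: expected inverse scales under the hierarchical prior
        d = 1.0 / (np.abs(w) + eps)
        # M-step: weighted ridge; small |w_i| get a large penalty and shrink
        w = np.linalg.solve(K.T @ K + lam * np.diag(d), K.T @ y)
    w[np.abs(w) < 1e-6] = 0.0  # hard-threshold tiny weights for sparsity
    return w

# toy usage: fit noisy sin(2*pi*x) samples with an RBF kernel
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(40)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
w = em_sparse_kernel_regression(K, y, lam=0.5)
print("nonzero weights:", int(np.count_nonzero(w)), "of", len(w))
```

This EM-as-reweighted-ridge view is a standard interpretation of sparsity-inducing hierarchical priors; the paper's contribution is applying such a scheme to multi-scale kernels and to an l0-like penalty as well.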
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 69, Issues 13–15, August 2006, Pages 1659–1664
Authors
D.N. Zheng, J.X. Wang, Y.N. Zhao,