Least squares support vector machines (LS-SVMs) express training as the solution of a system of linear equations or, equivalently, a quadratic program (QP) with a single linear equality constraint, in contrast to the QP with lower and upper bounds and one linear equality constraint solved by conventional support vector machines (SVMs). For large-scale problems, however, the presence of the linear equality constraint impedes the application of several well-developed methods. In this paper, we first eliminate the linear equality constraint from the QP for training LS-SVMs, making the problem unconstrained, and then propose a fast iterative single-data approach with stepsize acceleration for the unconstrained QP. By combining a variable-selection rule with coordinate descent, the proposed approach outperforms the successive over-relaxation (SOR) method; at the same time, updating only one variable per iteration makes it simpler and more flexible than the sequential minimal optimization (SMO) method. Experimental results on several benchmark data sets show that the proposed approach is more efficient than the existing single-data approach and the SMO method.
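The core idea — minimize an unconstrained quadratic by repeatedly picking one variable (here via the largest-gradient selection rule) and solving exactly along that coordinate — can be sketched as follows. This is a minimal illustration of single-coordinate descent on a generic unconstrained QP, not the authors' exact algorithm: the stepsize-acceleration step is omitted, and the matrix `H` (kernel matrix plus `I/gamma`), the regularization value, and the synthetic data are assumptions for the sake of a runnable example.

```python
import numpy as np

def coordinate_descent_qp(H, c, tol=1e-8, max_iter=10000):
    """Minimize f(a) = 0.5 * a^T H a - c^T a for symmetric positive
    definite H by greedy single-coordinate updates (largest-gradient rule)."""
    n = len(c)
    a = np.zeros(n)
    g = -c.copy()                      # gradient H a - c at a = 0
    for _ in range(max_iter):
        i = np.argmax(np.abs(g))       # select the coordinate with largest gradient
        if abs(g[i]) < tol:
            break                      # stationary point reached
        step = g[i] / H[i, i]          # exact minimization along coordinate i
        a[i] -= step
        g -= step * H[:, i]            # incremental gradient update, O(n) per step
    return a

# Small synthetic SPD problem standing in for an LS-SVM kernel system
# (assumed setup: H = K + I/gamma with gamma = 10, c = vector of ones).
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
K = X @ X.T
H = K + np.eye(20) / 10.0
c = np.ones(20)
a = coordinate_descent_qp(H, c)
print(np.allclose(H @ a, c, atol=1e-5))  # optimality condition H a = c
```

Because only one coordinate changes per iteration, each update costs O(n) for the gradient refresh, which is what makes single-data schemes attractive for large kernel matrices.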
► The minimization problem for LS-SVM is transformed into an unconstrained one.
► We suggest an iterative single data approach to training the unconstrained LS-SVM.
► A stepsize-acceleration scheme is incorporated to speed up the training process.
Journal: Neurocomputing - Volume 115, 4 September 2013, Pages 31–38