Article ID | Journal ID | Year | Original article | Full text |
---|---|---|---|---|
532050 | 869898 | 2015 | 13-page PDF (English) | Free download |
**Linear convergence rate for the MDM algorithm for the Nearest Point Problem**
• Linear convergence rate for the MDM algorithm is proved.
• No particular assumption is needed for the proof.
• A geometric and simple approach is followed.
• This may pave the way for proving similar results for the SMO algorithm used to train SVMs.
In this paper we prove a linear convergence rate for the extension of the Mitchell, Dem'yanov and Malozemov (MDM) algorithm that solves the Nearest Point Problem (NPP). While linear convergence proofs for the related (but different) SMO method used in SVM training require the kernel matrix to be positive definite, no such assumption is needed for MDM on NPP. Moreover, we also show linear convergence of the sequence of MDM vectors to the unique solution vector W⁎ of NPP, as well as of a quantity that measures the gap in the Karush–Kuhn–Tucker conditions. Furthermore, even though W⁎ may admit several multiplier representations, we show that any MDM-generated multiplier sequence converges linearly to an optimal multiplier. This linear rate is shown to be optimal, and it is illustrated numerically on six datasets. Our approach relies on a geometric point of view that yields a simple path to the proofs.
Journal: Pattern Recognition - Volume 48, Issue 4, April 2015, Pages 1510–1522
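For readers unfamiliar with the method, the classical MDM iteration for NPP can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: it finds the point of the convex hull of a point set closest to the origin by repeatedly transferring weight from the support point with the largest projection onto the current iterate to the point with the smallest one, until the KKT gap closes.

```python
import numpy as np

def mdm_nearest_point(X, tol=1e-8, max_iter=10000):
    """Sketch of the MDM iteration for the Nearest Point Problem:
    find the point of conv{x_1, ..., x_n} (rows of X) nearest the origin.
    Illustrative only; variable names and defaults are our own choices."""
    n = X.shape[0]
    alpha = np.full(n, 1.0 / n)           # start at the barycenter
    for _ in range(max_iter):
        w = alpha @ X                     # current iterate w = sum_i alpha_i x_i
        g = X @ w                         # inner products <w, x_i>
        support = alpha > 0
        i_max = np.where(support)[0][np.argmax(g[support])]
        i_min = np.argmin(g)
        delta = g[i_max] - g[i_min]       # KKT gap; zero at the optimum
        if delta < tol:
            break
        d = X[i_min] - X[i_max]
        # exact line search for ||w + t d||^2, clipped so alpha stays feasible
        t = min(delta / (d @ d), alpha[i_max])
        alpha[i_max] -= t
        alpha[i_min] += t
    return alpha @ X, alpha
```

On a small triangle, e.g. the rows `[1, 1]`, `[1, -1]`, `[3, 0]`, the iteration converges to the nearest hull point `[1, 0]`; the multipliers `alpha` stay nonnegative and sum to one throughout, which is the feasibility the paper's multiplier-convergence result is stated over.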