Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
5128300 | 1378588 | 2016 | 12-page PDF | Free download |
Support Vector Machines (SVMs) are ubiquitous and have attracted huge interest in recent years. Their training involves the definition of a suitable optimization model with two main features: (1) its optimal solution estimates the a posteriori optimal SVM parameters in a reliable way, and (2) it can be solved efficiently. Hinge-loss models, among others, have been used with remarkable success together with cross-validation; the latter is instrumental to the success of the overall training, though it can become very time consuming. In this paper we propose a different model for SVM training that seems particularly suited when the Gaussian kernel is adopted (as is often the case). Our approach is to model the overall training problem as a whole, thus avoiding the need for cross-validation. Though our basic model is an NP-hard Mixed-Integer Linear Program, some variants can be solved very efficiently by simple sorting algorithms. Computational results on test cases from the literature are presented, showing that our training method can lead to a classification accuracy comparable to (or even slightly better than) the classical hinge-loss model, with a speedup of 2-3 orders of magnitude.
Journal: Discrete Optimization - Volume 22, Part A, November 2016, Pages 183-194
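For context, the sketch below illustrates the classical baseline the abstract compares against: hinge-loss SVM training with a Gaussian (RBF) kernel, with the hyperparameters C and gamma tuned by cross-validated grid search. It is not the authors' MILP-based method; the dataset and grid values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the classical hinge-loss + cross-validation baseline.
# Assumptions: synthetic data and an arbitrary (C, gamma) grid chosen only
# for illustration; the paper's own experiments use test cases from the
# literature.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Cross-validated grid search over (C, gamma): this tuning loop is the
# time-consuming step that the paper's model-as-a-whole approach avoids.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```

Each candidate (C, gamma) pair here requires training five SVMs (one per fold), which is why cross-validation dominates the cost of classical training and why removing it can yield the large speedups the abstract reports.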