Article ID | Journal | Published Year | Pages
---|---|---|---
410602 | Neurocomputing | 2009 | 9 Pages
Abstract
Fast SVM training is an important goal for which many proposals have been given in the literature. In this work we will study from a geometrical point of view the presence, in both the Mitchell–Demyanov–Malozemov (MDM) algorithm and Platt's Sequential Minimal Optimization, of training cycles, that is, the repeated selection of some concrete updating patterns. We shall see how to take advantage of these cycles by partially collapsing them in a single updating vector that gives better minimizing directions. We shall numerically illustrate the resulting procedure, showing that it can lead to substantial savings in the number of iterations and kernel operations for both algorithms.
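The cycle-collapsing idea described above can be illustrated with a minimal sketch. This is not the authors' actual procedure, only an assumed toy version: the working-pair selections of an MDM/SMO-style solver are recorded, a repeating tail pattern is detected, and the unit updating vectors of one full cycle are summed into a single combined direction along which one line search can replace several pairwise updates. The helpers `detect_cycle` and `pair_direction` are hypothetical names introduced here.

```python
import numpy as np

def detect_cycle(history, max_len=4):
    """Length of a repeating tail pattern in `history`, or 0 if none."""
    for L in range(2, max_len + 1):
        if len(history) >= 2 * L and history[-L:] == history[-2 * L:-L]:
            return L
    return 0

def pair_direction(i, j, n):
    """Unit updating vector that shifts weight from coordinate i to j,
    as in a pairwise (SMO/MDM-style) coefficient update."""
    d = np.zeros(n)
    d[i], d[j] = -1.0, 1.0
    return d

# Suppose the solver's working-pair selections start cycling:
history = [(0, 1), (1, 2), (0, 1), (1, 2)]
L = detect_cycle(history)

# Collapse one full cycle into a single combined updating vector;
# a single line search along it stands in for the repeated pair updates.
combined = sum(pair_direction(i, j, 3) for i, j in history[-L:])
```

Here `L` comes out as 2 and `combined` as `[-1, 0, 1]`: the intermediate traffic through coordinate 1 cancels, which is exactly why a collapsed direction can give a better minimizing step than replaying the cycle.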
Authors
Álvaro Barbero, Jorge López, José R. Dorronsoro