Article ID | Journal | Published Year | Pages
---|---|---|---
410865 | Neurocomputing | 2011 | 8
Abstract
The Maximal Discrepancy (MD) is a powerful statistical method, which has been proposed for model selection and error estimation in classification problems. This approach is particularly attractive when dealing with small-sample problems, since it avoids the use of a separate validation set. Unfortunately, the MD method requires a bounded loss function, which most learning algorithms, including the Support Vector Machine (SVM), avoid because it gives rise to a non-convex optimization problem. In this work we derive a new approach for rigorously applying the MD technique to the error estimation of the SVM while, at the same time, preserving the original SVM framework.
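The general MD idea is to split the training sample into two halves and penalize the training error by the largest gap, over the hypothesis class, between the empirical errors on the two halves. A minimal illustrative sketch of this penalty (not the paper's SVM-specific derivation) on a toy one-dimensional class of threshold classifiers, where the maximization can be done by enumeration:

```python
import random

def err(h, data):
    """0/1 error of a threshold classifier h = (threshold, sign) on 1-D points."""
    t, s = h
    return sum(1 for x, y in data if (s if x >= t else -s) != y) / len(data)

def maximal_discrepancy(data, hypotheses):
    """Split the sample into two halves and return the largest difference
    between the empirical errors on the two halves over the class."""
    half = len(data) // 2
    s1, s2 = data[:half], data[half:2 * half]
    return max(err(h, s1) - err(h, s2) for h in hypotheses)

# Toy noisy 1-D dataset with labels in {-1, +1}
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(40)]
data = [(x, 1 if x + random.gauss(0, 0.3) > 0 else -1) for x in xs]

# Small finite class of threshold classifiers (hypothetical choice for illustration)
hypotheses = [(t / 10.0, s) for t in range(-10, 11) for s in (1, -1)]

md = maximal_discrepancy(data, hypotheses)
print(round(md, 3))  # MD penalty: added to the training error in the bound
```

The enumeration works here only because the hypothesis class is finite; the paper's contribution is precisely how to carry out this computation rigorously for the SVM, whose hinge loss is unbounded.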
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Davide Anguita, Alessandro Ghio, Sandro Ridella