Article ID: 7124875
Journal: Measurement
Published Year: 2014
Pages: 7
File Type: PDF
Abstract
The learning ability and generalization performance of the support vector machine (SVM) depend mainly on a reasonable selection of its super-parameters. When the training sample set is large and the parameter space is huge, the existing popular super-parameter selection methods are impractical because of their high computational complexity. In this paper, a novel super-parameter selection method for the SVM with a Gaussian kernel is proposed, which consists of two stages. The first stage chooses the kernel parameter so that a sufficiently large number of potential support vectors are retained in the training sample set. The second stage screens outliers out of the training sample set by assigning a special value to the penalty factor, and then trains the optimal penalty factor on the remaining training sample set without outliers. The whole super-parameter selection process needs only two train-validate cycles, so the computational complexity of the method is low. Comparative experiments on 8 benchmark datasets show that the method achieves high classification accuracy with desirable training time.
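The staged structure described above (fix the Gaussian kernel parameter first, then select the penalty factor with the kernel parameter held fixed) can be sketched as follows. This is only an illustrative sketch under assumed choices, not the authors' algorithm: the paper's specific rules for retaining potential support vectors, the special penalty value used to screen outliers, and the two train-validate cycles are replaced here by a simple per-stage validation sweep; the dataset, candidate grids, and scoring are assumptions for the example.

```python
# Illustrative two-stage selection of (gamma, C) for an RBF-kernel SVM.
# Stage 1 picks the kernel parameter with the penalty factor held fixed;
# Stage 2 picks the penalty factor with the chosen kernel parameter fixed.
# This avoids a full 2-D grid search, but is NOT the paper's selection method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed benchmark data; any labeled classification dataset would do.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_val = scaler.transform(X_train), scaler.transform(X_val)

# Stage 1: choose the kernel parameter gamma with the penalty factor fixed.
gamma_candidates = np.logspace(-4, 1, 6)
gamma_scores = [
    SVC(kernel="rbf", C=1.0, gamma=g).fit(X_train, y_train).score(X_val, y_val)
    for g in gamma_candidates
]
best_gamma = gamma_candidates[int(np.argmax(gamma_scores))]

# Stage 2: with gamma fixed, choose the penalty factor C.
C_candidates = np.logspace(-2, 3, 6)
C_scores = [
    SVC(kernel="rbf", C=c, gamma=best_gamma).fit(X_train, y_train).score(X_val, y_val)
    for c in C_candidates
]
best_C = C_candidates[int(np.argmax(C_scores))]

print(f"selected gamma={best_gamma:.4g}, C={best_C:.4g}")
```

Decoupling the two parameters in this way reduces the number of candidate models from the product of the two grid sizes to their sum; the paper's contribution is to reduce this further, to a single train-validate cycle per stage, via its own selection rules.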
Related Topics
Physical Sciences and Engineering > Engineering > Control and Systems Engineering
Authors