Article ID: 865508
Journal: Tsinghua Science & Technology
Published Year: 2009
Pages: 10
File Type: PDF
Abstract
Support vector machines (SVMs) aim to find an optimal separating hyper-plane, that is, the hyper-plane that maximizes the margin between two classes of training examples. The choice of the cost parameter used to train the SVM model is a critical issue. This paper studies how the cost parameter determines the hyper-plane, especially for classification using only positive and unlabeled data. An algorithm is given that computes the entire solution path and chooses the 'best' cost parameter while training the SVM model. Its performance is compared with conventional implementations that use a default value for the cost parameter, on two synthetic data sets and two real-world data sets. The results show that the algorithm achieves better results for classification with positive and unlabeled data.
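For context, the role of the cost parameter can be seen in the standard soft-margin SVM primal problem. The formulation below is a generic sketch in conventional notation (weight vector w, bias b, slack variables ξ_i, cost parameter C), not necessarily the exact notation used in the paper.

\[
\min_{w,\,b,\,\xi}\ \ \tfrac{1}{2}\,\lVert w \rVert^{2} \;+\; C \sum_{i=1}^{n} \xi_i
\qquad \text{s.t.}\quad y_i\bigl(w^{\top} x_i + b\bigr) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i = 1,\dots,n.
\]

A small C favors a wide margin and tolerates training errors, while a large C penalizes slack heavily; the resulting hyper-plane therefore depends directly on this choice, which is why selecting C along the solution path matters.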
Related Topics
Physical Sciences and Engineering › Engineering › Engineering (General)
Authors