Article ID: 486802
Journal: Procedia Computer Science
Published Year: 2010
Pages: 9
File Type: PDF
Abstract

A two-class Support Vector Machine (SVM) classifier finds a hyperplane that separates two classes of data with the maximum margin. In its learning phase, the SVM constructs and solves a primal-dual interior-point optimization problem. Each iteration of the interior-point method (IPM) requires the solution of a sparse linear system, which dominates the execution time of SVM learning. Depending on the conditioning of the matrix, solving this linear system can be computationally expensive; consequently, preconditioning the linear system can improve SVM training performance while maintaining classification accuracy. In this paper, we characterize the role of preconditioning schemes in enhancing SVM classifier performance. We compare and report the solution time, convergence, and number of Newton iterations of the interior-point method, as well as the classification accuracy of the SVM, for six well-accepted preconditioning schemes on datasets chosen from well-known machine learning repositories. In particular, we introduce Δ-IPM, which sparsifies the linear system at each iteration of the IPM. Our results indicate that, on average, the Jacobi and SSOR preconditioners perform 10.01 times better than the other preconditioning schemes for IPM and 8.83 times better for Δ-IPM. Across all datasets, Jacobi and SSOR perform between 2 and 30 times better than the other schemes in both IPM and Δ-IPM. Moreover, Δ-IPM achieves an average speedup of 1.25 over IPM, and as much as 2 times in the best case.
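To make the computation concrete, below is a minimal sketch of the kind of preconditioned solve the abstract describes: a Jacobi (diagonal) preconditioner applied to a conjugate-gradient solve of a sparse symmetric positive definite system, using SciPy. The matrix is a stand-in rather than an actual IPM system, and the threshold-based sparsification at the end is only an assumption about what a Δ-IPM-style step might look like; the paper's actual solver, SSOR variant, and Δ rule may differ.

```python
# Illustrative sketch only, not the paper's implementation.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
# Stand-in SPD matrix (a shifted 1-D Laplacian) in place of the IPM system matrix.
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply M^{-1} = diag(A)^{-1} matrix-free.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)

x_plain, info_plain = cg(A, b)          # unpreconditioned CG
x_jacobi, info_jacobi = cg(A, b, M=M)   # Jacobi-preconditioned CG
print(info_plain, info_jacobi)          # 0 indicates convergence

# Hypothetical Δ-style step (assumption): drop small off-diagonal entries
# below a threshold before the solve, trading accuracy for sparsity.
delta = 0.5
A_delta = A.copy()
A_delta.data[np.abs(A_delta.data) < delta] = 0.0
A_delta.eliminate_zeros()
```

In practice, the payoff of a preconditioner like Jacobi or SSOR grows with the condition number of the system, which is exactly why the abstract reports iteration counts and solve times rather than accuracy alone.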

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Science (General)