Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
488181 | Procedia Computer Science | 2011 | 10 | |
There is growing interest in performing ever more complex classification tasks on mobile and embedded devices in real time, which results in the need for efficient implementations of the respective algorithms. Support vector machines (SVMs) represent a powerful class of nonlinear classifiers, and reducing the working precision represents a promising approach to achieving efficient implementations of the SVM classification phase. However, the relationship between SVM classification accuracy and the arithmetic precision used is not yet sufficiently understood. We investigate this relationship in floating-point arithmetic and illustrate that a large reduction in the working precision of the classification process is often possible without loss in classification accuracy. Moreover, we investigate the adaptation of bounds on allowable SVM parameter perturbations in order to estimate the lowest possible working precision in floating-point arithmetic. Among the three representative data sets considered in this paper, none requires a precision higher than 15 bits, which is a considerable reduction from the 53 bits used in double-precision floating-point arithmetic. Furthermore, we demonstrate that analytic bounds on the working precision for SVMs with a Gaussian kernel provide good predictions of the possible reduction in working precision without sacrificing classification accuracy.
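To make the idea concrete, the following is a minimal sketch (not the paper's actual experiment) of how one might probe the relationship between working precision and classification accuracy for a Gaussian-kernel SVM: the trained SVM parameters are rounded to p significand bits and the resulting accuracy is compared against the full double-precision classifier. The dataset, the gamma value, and the `quantize` helper are illustrative choices, and the sketch quantizes the stored parameters while still evaluating the kernel in double precision, so it is a coarse proxy for a genuinely reduced working precision rather than a bit-exact simulation of low-precision arithmetic.

```python
# Sketch: estimate how few significand bits a Gaussian-kernel SVM tolerates
# by rounding its parameters to p bits and measuring test accuracy.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def quantize(x, p):
    """Round each value to p significand bits (p = 53 reproduces double precision)."""
    m, e = np.frexp(x)                     # x = m * 2**e with |m| in [0.5, 1)
    return np.ldexp(np.round(m * 2.0**p) / 2.0**p, e)

def rbf_decision(X, sv, coef, b, gamma):
    """Gaussian-kernel SVM decision function: f(x) = sum_i coef_i * k(x, sv_i) + b."""
    d2 = ((X[:, None, :] - sv[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2) @ coef + b

X, y = load_breast_cancer(return_X_y=True)  # illustrative binary data set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

gamma = 1.0 / X_tr.shape[1]                 # explicit gamma so it can be quantized too
clf = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_tr, y_tr)
sv, coef, b = clf.support_vectors_, clf.dual_coef_.ravel(), clf.intercept_[0]

for p in (53, 24, 15, 10, 6):               # candidate working precisions in bits
    f = rbf_decision(quantize(X_te, p), quantize(sv, p),
                     quantize(coef, p), quantize(b, p), quantize(gamma, p))
    acc = np.mean((f > 0).astype(int) == y_te)
    print(f"{p:2d} significand bits: accuracy = {acc:.4f}")
```

Sweeping p downward in this way makes the abstract's observation directly checkable on a given data set: accuracy typically stays flat well below 53 bits and only degrades once the perturbation of the support vectors and dual coefficients becomes large relative to the classifier's margin.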