| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 848344 | Optik - International Journal for Light and Electron Optics | 2015 | 6 | |
Because the support vector machine (SVM) generalizes well, it has been applied successfully in a wide variety of domains. Solving its underlying optimization problem, however, requires computing the kernel matrix, whose dimension equals the number of records in the training set, so the memory cost is high. Several improved algorithms have been proposed to reduce this memory requirement, but most rely on iterative computation and are therefore slow. Since existing SVM models struggle to perform well on both runtime and memory, we propose a new method that reduces memory consumption without any iteration. The method introduces an effective measure in kernel space to extract a subset of the training set that contains the support vectors. The number of samples participating in training therefore decreases, accelerating the training process to a time complexity of only O(n log n). A further advantage is that the method can be used in conjunction with other SVM algorithms. Experiments demonstrate the effectiveness and efficiency of SVM algorithms enhanced with the proposed method.
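The abstract does not specify the kernel-space measure used to pre-select support-vector candidates, so the sketch below is only illustrative: it assumes one plausible choice, namely the feature-space distance from each sample to the mean of the opposite class, and keeps, per class, the samples closest to the other class before training a standard SVM on that subset. All function names and parameters (`rbf_kernel`, `select_candidates`, `keep_ratio`, `gamma`) are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC


def rbf_kernel(X, Y, gamma=0.5):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    sq = (np.sum(X ** 2, axis=1)[:, None]
          + np.sum(Y ** 2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)


def select_candidates(X, y, keep_ratio=0.3, gamma=0.5):
    """Assumed stand-in for the paper's kernel-space measure: keep, for each
    class, the samples whose feature-space images lie closest to the mean of
    the opposite class, since these are the likely support vectors."""
    keep = []
    for cls in np.unique(y):
        own = np.where(y == cls)[0]
        other = np.where(y != cls)[0]
        # Squared feature-space distance to the opposite-class mean:
        # ||phi(x) - m||^2 = K(x, x) - 2 * mean_j K(x, x_j) + const
        k_cross = rbf_kernel(X[own], X[other], gamma).mean(axis=1)
        dist = 1.0 - 2.0 * k_cross  # K(x, x) = 1 for RBF; constant term dropped
        n_keep = max(2, int(keep_ratio * len(own)))
        # Sorting the scores is the dominant cost: O(n log n)
        keep.extend(own[np.argsort(dist)[:n_keep]])
    return np.array(sorted(keep))


if __name__ == "__main__":
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    idx = select_candidates(X, y, keep_ratio=0.2)
    svm = SVC(kernel="rbf", gamma=0.5).fit(X[idx], y[idx])
    print(f"trained on {len(idx)} of {len(y)} samples, "
          f"{svm.support_vectors_.shape[0]} support vectors")
```

The kernel matrix computed during pre-selection only spans one class against the other and is never passed to the solver, so the subsequent SVM training sees a much smaller problem; the sort over the candidate scores is what gives the O(n log n) behavior mentioned in the abstract, under the assumptions stated above.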