Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
410782 | Neurocomputing | 2008 | 5 Pages |
Abstract
Kernel machines have been widely used in learning. However, standard algorithms are often time-consuming. To this end, we propose a new method, direct simplification (DS), for imposing sparsity on kernel regression machines. Unlike existing sparse methods, DS performs approximation and optimization in a unified framework by incrementally finding a set of basis functions that minimizes the primal risk function directly. The main advantage of our method lies in its ability to form very good approximations of kernel regression machines with clear control over the computational complexity as well as the training time. Experiments on two real time series and two benchmarks assess the feasibility of our method and show that DS can obtain better performance with fewer bases than two-step sparse methods.
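The abstract describes incrementally selecting basis functions so that the primal risk decreases at each step. The paper itself is not reproduced here, so the following is only an illustrative sketch of that general idea, assuming a Gaussian kernel, a regularized squared-loss primal risk, and greedy forward selection of training points as centers; the function name `direct_simplification_sketch` and all parameters are hypothetical, not the authors' exact DS algorithm.

```python
import numpy as np

def rbf(X, c, gamma=1.0):
    # Gaussian kernel values between each row of X and a single center c
    return np.exp(-gamma * np.sum((X - c) ** 2, axis=-1))

def direct_simplification_sketch(X, y, n_bases=5, lam=1e-3, gamma=1.0):
    """Greedy forward selection of kernel basis functions (illustrative only).

    At each step, try every remaining training point as a new center,
    refit the regularized least-squares weights, and keep the center
    that most reduces the primal risk ||y - K w||^2 + lam * ||w||^2.
    """
    n = len(X)
    centers, remaining = [], list(range(n))
    w = np.zeros(0)
    for _ in range(n_bases):
        best = None
        for j in remaining:
            cand = centers + [j]
            # design matrix over the candidate set of centers
            K = np.stack([rbf(X, X[c], gamma) for c in cand], axis=1)
            w_cand = np.linalg.solve(K.T @ K + lam * np.eye(len(cand)), K.T @ y)
            risk = np.sum((y - K @ w_cand) ** 2) + lam * np.sum(w_cand ** 2)
            if best is None or risk < best[0]:
                best = (risk, j, w_cand)
        _, j, w = best
        centers.append(j)
        remaining.remove(j)
    K = np.stack([rbf(X, X[c], gamma) for c in centers], axis=1)
    return centers, w, K @ w  # selected centers, weights, fitted values
```

A usage example on synthetic data: fitting a noisy sine curve with a handful of bases, which mirrors the paper's point that a small basis set can approximate the full kernel machine well. Refitting the weights for every candidate center is cubic per step; the actual DS method presumably uses cheaper incremental updates to keep training time controlled.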
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Wenwu He, Zhizhong Wang