| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 9653440 | Neurocomputing | 2005 | 5 | |
Abstract
Support Vector Machines are supervised regression and classification machines with the attractive property of automatically identifying which data points are most important in constructing the machine. Kernel Principal Component Analysis (KPCA) is a related technique in that it also relies on linear operations in a feature space, but it lacks this ability to identify important points. Sparse KPCA goes too far in the other direction, identifying a single data point as most important. We show how, by bagging the data, we may create a compromise which gives us a sparse, but not grandmother-cell, representation for KPCA.
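Since the paper itself is not reproduced here, the following is only a minimal, hypothetical sketch of the general idea the abstract describes: fitting kernel PCA on several small bootstrap bags of the data, so that each bag contributes a handful of "support" points rather than the whole dataset or a single point. The function names (`bagged_kpca`, `project`), the choice of RBF kernel, and the decision to concatenate the per-bag projections are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel between rows of X and rows of Y.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def kpca(X, n_components=2, gamma=1.0):
    """Standard kernel PCA: eigendecomposition of the centred Gram matrix."""
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one          # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]          # largest eigenvalues first
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))     # unit-norm feature-space eigenvectors
    return alphas, X, K

def bagged_kpca(X, n_bags=10, bag_size=30, n_components=2, gamma=1.0, rng=None):
    """Fit KPCA on several bootstrap bags; only the bagged points are retained,
    giving a representation that is sparse but not reduced to a single point."""
    rng = np.random.default_rng(rng)
    models = []
    for _ in range(n_bags):
        idx = rng.choice(len(X), size=bag_size, replace=True)
        models.append(kpca(X[idx], n_components, gamma))
    return models

def project(models, Xnew, gamma=1.0):
    """Project new points with each bag's KPCA and concatenate the results.
    (Components from different bags are not aligned, so we keep them as
    separate features rather than averaging them.)"""
    projs = []
    for alphas, Xbag, K in models:
        n = Xbag.shape[0]
        K_test = rbf_kernel(Xnew, Xbag, gamma)
        one_m = np.ones((Xnew.shape[0], n)) / n
        one_n = np.ones((n, n)) / n
        K_test_c = K_test - one_m @ K - K_test @ one_n + one_m @ K @ one_n
        projs.append(K_test_c @ alphas)
    return np.hstack(projs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    models = bagged_kpca(X, n_bags=5, bag_size=40, rng=0)
    Z = project(models, X[:10])
    print(Z.shape)   # (10, 10): 5 bags x 2 components each
```

In this sketch each bag stores only `bag_size` training points, so the memory and projection cost scale with the bags rather than the full dataset, which is the kind of sparse-but-distributed compromise the abstract refers to.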
Keywords
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
C. García-Osorio, Colin Fyfe