Article ID: 4948174
Journal: Neurocomputing
Published Year: 2016
Pages: 12
File Type: PDF
Abstract
Machine learning methods employing positive-definite kernels have been developed and widely used for classification, regression, prediction and unsupervised learning applications, in which the estimated function takes the form of a weighted-sum kernel expansion. The main drawbacks of kernel methods are the heavy computational burden on large datasets and the difficulty of tuning their hyperparameters. In order to reduce the computational burden, this paper presents a modified version of the Feature Vector Selection (FVS) method, which approximates the estimated function as a weighted sum of the predicted values at the Feature Vectors (FVs), where the weights are the oblique-projection coefficients of the new data points onto the FVs in the feature space. The approximation is then obtained by optimizing only the predicted values at the FVs. By formulating a least-squares error optimization problem with equality constraints, analytic solutions for the predicted values at the FVs are obtained. The proposed method is named Feature Vector Regression (FVR). The tuning of hyperparameters in FVR is also explained in the paper and shown to be less complicated than for other kernel methods. Comparisons with other popular kernel regression methods on several public datasets show that FVR, using only a small subset of the training dataset (i.e. the selected FVs), achieves prediction accuracy comparable to that of the best-performing methods. The main contribution of this paper is the new kernel method (i.e. FVR), which achieves satisfactory results with reduced effort, thanks to the small number of hyperparameters to be tuned and the reduced size of the training set used.
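To make the pipeline described above concrete, the following is a minimal NumPy sketch of an FVR-style regressor. It is an illustrative approximation, not the paper's exact method: the RBF kernel, the greedy FVS selection rule, the function names (select_fvs, fvr_fit, fvr_predict) and all parameter defaults are assumptions, and it uses plain unconstrained least squares where the paper solves a least-squares problem with equality constraints.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def select_fvs(X, gamma=1.0, tol=1e-3, max_fvs=50):
    """Greedily pick Feature Vectors: repeatedly add the training point
    whose feature-space image is worst represented by the span of the
    current FVs (a common greedy variant of FVS; the paper's exact
    selection rule may differ)."""
    idx = [0]  # arbitrary starting point (assumption)
    for _ in range(max_fvs - 1):
        K_ss_inv = np.linalg.pinv(rbf_kernel(X[idx], X[idx], gamma))
        K_sx = rbf_kernel(X[idx], X, gamma)
        # Representation fitness of each point:
        #   k_S(x)^T K_SS^{-1} k_S(x) / k(x, x), with k(x, x) = 1 for RBF.
        fit = np.einsum('ij,ik,kj->j', K_sx, K_ss_inv, K_sx)
        worst = int(np.argmin(fit))
        if fit[worst] > 1.0 - tol:  # everything already well represented
            break
        idx.append(worst)
    return idx

def fvr_fit(X, y, gamma=1.0):
    """Fit: select FVs, then optimize only the predicted values at the
    FVs (here via unconstrained least squares, a simplification of the
    paper's equality-constrained problem)."""
    idx = select_fvs(X, gamma)
    K_ss_inv = np.linalg.pinv(rbf_kernel(X[idx], X[idx], gamma))
    # Oblique-projection coefficients of every training point onto the FVs.
    B = (K_ss_inv @ rbf_kernel(X[idx], X, gamma)).T   # shape (n, n_fv)
    y_fv, *_ = np.linalg.lstsq(B, y, rcond=None)      # values at the FVs
    return idx, y_fv, K_ss_inv

def fvr_predict(X_new, X, idx, y_fv, K_ss_inv, gamma=1.0):
    """Predict: project new points onto the FVs in feature space and
    take the weighted sum of the optimized FV values."""
    beta = (K_ss_inv @ rbf_kernel(X[idx], X_new, gamma)).T
    return beta @ y_fv

# Toy usage: 1-D noisy sine.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
idx, y_fv, K_ss_inv = fvr_fit(X, y, gamma=2.0)
print(len(idx), "FVs selected")
```

Note the asymmetry this sketch illustrates: training cost is dominated by the FV selection, while prediction involves only the small K_SS inverse and one kernel evaluation per FV, which is the source of the computational savings claimed in the abstract.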
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence