| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 530083 | Pattern Recognition | 2013 | 12 | |
This paper presents a novel feature-selection algorithm for regression data containing many irrelevant features. The proposed method is based on a well-established machine-learning technique and makes no assumptions about the underlying data distribution. The key idea is to decompose an arbitrarily complex nonlinear problem into a set of locally linear ones through local information, and to learn feature relevance globally within the least-squares loss framework. In contrast to other feature-selection algorithms for regression, learning is efficient because the solution can be found readily through gradient descent with a simple update rule. Experiments on synthetic and real-world data sets demonstrate the viability of our formulation of the feature-selection problem and the effectiveness of our algorithm.
► The paper proposes a local-information-based feature-selection algorithm. ► The method decomposes a complex nonlinear problem into a set of locally linear ones. ► The method learns feature relevance globally within the least-squares loss framework. ► The method finds the solution efficiently via a simple gradient-descent update. ► The method can scale the features of regression data containing many irrelevant features.
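To make the idea concrete, the sketch below illustrates the general flavour of the approach described in the abstract: each sample's target is predicted from its neighbours in a feature-scaled space (the local-information idea), and a global per-feature relevance vector is learned by gradient descent on a least-squares loss. This is a minimal, hypothetical illustration; the kernel form, the function `fit_feature_weights`, the step size, and the non-negativity clipping are assumptions for exposition, not the authors' exact algorithm.

```python
# Hypothetical sketch: local-learning-based feature weighting for regression.
# Feature relevance weights w scale the input space; targets are predicted by a
# kernel-weighted average over neighbours, and w is updated by gradient descent
# on the sum of squared prediction errors.
import numpy as np

def fit_feature_weights(X, y, n_iter=200, lr=0.1, eps=1e-12):
    """Learn non-negative per-feature relevance weights for regression data."""
    n, d = X.shape
    w = np.ones(d)                                # start with all features equally relevant
    diff = X[:, None, :] - X[None, :, :]          # pairwise feature differences, shape (n, n, d)
    sq = diff ** 2
    off_diag = ~np.eye(n, dtype=bool)             # exclude self-pairs from the local average

    for _ in range(n_iter):
        # Gaussian-style kernel in the scaled feature space
        s = sq @ (w ** 2)                         # (n, n) scaled squared distances
        K = np.exp(-s) * off_diag
        Z = K.sum(axis=1) + eps
        y_hat = (K @ y) / Z                       # locally weighted prediction per sample

        resid = y_hat - y                         # least-squares residuals
        # dK_ij/dw_k = -2 * w_k * (x_ik - x_jk)^2 * K_ij
        dK = -2.0 * w * sq * K[:, :, None]        # (n, n, d)
        # dy_hat_i/dw_k = sum_j dK_ij * (y_j - y_hat_i) / Z_i
        dy_hat = np.einsum('ijk,ij->ik', dK, y[None, :] - y_hat[:, None]) / Z[:, None]
        grad = 2.0 * resid @ dy_hat               # gradient of the squared-error loss

        w = np.maximum(w - lr * grad, 0.0)        # simple projected gradient-descent update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 6))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.normal(size=80)  # only 2 relevant features
    print(np.round(fit_feature_weights(X, y), 3))  # weights on irrelevant features shrink toward 0
```

On data of this kind, the learned weights on the irrelevant features are driven toward zero while the relevant ones retain large values, which is the qualitative behaviour the abstract claims for regression data with many irrelevant features.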
