Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
4947341 | Neurocomputing | 2017 | 25 Pages | |
Abstract
Feature selection, which removes noisy/irrelevant samples and selects a subset of representative features from high-dimensional data, has become a critically important technique in computer vision and machine learning. Motivated by the interpretability of feature selection models and by the successful use of the low-rank constraint in statistics and sparse learning, we present a novel unsupervised feature selection model that applies low-rank regression in the loss function and combines a sparsity term with K-means clustering in the regularization term. The proposed method differs from existing state-of-the-art feature selection approaches in three ways: (1) it represents each feature by the other features (including itself) through a feature-level self-expression loss function; (2) it embeds K-means clustering to generate pseudo class labels for feature selection, yielding a pseudo-supervised method, since supervised learning usually achieves better recognition results than unsupervised learning; and (3) it imposes a low-rank constraint on feature selection that exploits two kinds of information inherent in the data: the low-rank constraint accounts for the correlations among response variables, while an ℓ2,p-norm regularizer captures the correlation between feature vectors and their corresponding response variables. Extensive experimental results on three multi-modal benchmark datasets demonstrate that the proposed unsupervised feature selection model outperforms related approaches.
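To make the feature-level self-expression idea concrete, the sketch below solves a simplified form of the objective, min_W ||X − XW||²_F + α||W||_{2,1} (the ℓ2,p regularizer with p = 1), via iteratively reweighted least squares, and ranks features by the row norms of W; a small helper shows how K-means pseudo labels could be formed. This is a minimal illustrative sketch, not the authors' implementation: the explicit low-rank factorization and the joint pseudo-label regression term are omitted, and all function names and parameters here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def self_expressive_feature_selection(X, alpha=1.0, n_iter=100, eps=1e-8):
    """Rank features by solving  min_W ||X - X W||_F^2 + alpha * ||W||_{2,1}
    with iteratively reweighted least squares (IRLS).

    Feature-level self-expression: every feature is reconstructed from all
    features (including itself); rows of W with large l2-norms mark
    representative features.
    """
    n, d = X.shape
    XtX = X.T @ X
    # Ridge-regularized warm start to avoid zero rows in the first reweighting.
    W = np.linalg.solve(XtX + alpha * np.eye(d), XtX)
    for _ in range(n_iter):
        # IRLS reweighting for the l2,1-norm: D_jj = 1 / (2 ||w_j||_2)
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))
        # Closed-form update: W = (X^T X + alpha D)^{-1} X^T X
        W = np.linalg.solve(XtX + alpha * D, XtX)
    scores = np.linalg.norm(W, axis=1)   # feature importance scores
    return np.argsort(scores)[::-1]      # feature indices, best first

def kmeans_pseudo_labels(X, n_clusters):
    """Hypothetical helper: one-hot pseudo-label matrix from K-means, the
    kind of surrogate response the paper uses for pseudo-supervision."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return np.eye(n_clusters)[labels]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))   # 200 samples, 50 features
    top = self_expressive_feature_selection(X, alpha=0.5)[:10]
    print("top-10 feature indices:", top)
```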
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Rongyao Hu, Jie Cao, Debo Cheng, Wei He, Yonghua Zhu, Qing Xie, Guoqiu Wen,