Article ID | Journal ID | Publication year | Original article | Full text |
---|---|---|---|---|
535436 | 870346 | 2014 | 10-page PDF | Free download |
• We consider multi-label learning under feature extraction budgets.
• Multi-task lasso is compared with a new greedy forward selection method.
• A computationally efficient training algorithm is presented for the greedy method.
• Greedy selection is shown to have superior performance with small budgets.
We consider the problem of learning sparse linear models for multi-label prediction tasks under a hard constraint on the number of features. Such budget constraints are important in domains where acquiring feature values is costly. We propose a greedy multi-label regularized least-squares algorithm that solves this problem by combining greedy forward selection search with a cross-validation based criterion for choosing which features to include in the model. We present a highly efficient algorithm for implementing this procedure with linear time and space complexity, achieved through matrix update formulas that speed up feature addition and cross-validation computations. Experimentally, we demonstrate that the approach finds sparse, accurate predictors on a wide range of benchmark problems, typically outperforming the multi-task lasso baseline when the budget is small.
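The core procedure described above can be sketched as follows. This is an illustrative, naive version only: at each step it retrains a multi-label ridge regression from scratch for every candidate feature and picks the one with the lowest cross-validated squared error, whereas the paper's contribution is achieving the same selection in linear time via matrix update formulas. The function name, fold scheme, and regularization parameter are assumptions, not the authors' implementation.

```python
import numpy as np

def greedy_rls_selection(X, Y, budget, lam=1.0, folds=3, seed=0):
    """Greedy forward selection for multi-label regularized least squares.

    Sketch only (hypothetical API): repeatedly adds the feature whose
    inclusion yields the lowest cross-validated squared error over all
    labels, stopping once `budget` features are selected.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    fold_id = rng.integers(0, folds, size=n)   # random CV fold assignment
    selected, remaining = [], set(range(d))
    for _ in range(budget):
        best_f, best_err = None, np.inf
        for f in remaining:
            cols = selected + [f]
            err = 0.0
            for k in range(folds):
                tr, te = fold_id != k, fold_id == k
                A = X[tr][:, cols]
                # ridge solution W = (A^T A + lam I)^{-1} A^T Y, shared
                # sparsity pattern across all labels (columns of Y)
                W = np.linalg.solve(A.T @ A + lam * np.eye(len(cols)),
                                    A.T @ Y[tr])
                err += np.sum((X[te][:, cols] @ W - Y[te]) ** 2)
            if err < best_err:
                best_f, best_err = f, err
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

Note the budget enters as a hard stopping condition on the loop, so the returned model uses exactly `budget` features regardless of how the regularization is tuned; this is what distinguishes the setting from soft sparsity penalties such as the multi-task lasso.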
Journal: Pattern Recognition Letters - Volume 40, 15 April 2014, Pages 56–65