Article ID | Journal | Published Year | Pages
---|---|---|---
408422 | Neurocomputing | 2016 | 12
• The sparse optimal scoring problem using the zero-norm is considered in the high-dimensional setting.
• The difficulty in treating the zero-norm is overcome by using two DC approximation functions.
• Alternating schemes based on DCA (DC Algorithms) are proposed.
• Four DCA schemes are investigated for the subproblems, resulting in four versions of the alternating method using DCA.
• The proposed algorithms outperform five standard ones on both synthetic and benchmark datasets.
Linear Discriminant Analysis (LDA) is a standard tool for classification and dimension reduction in many applications. However, high dimensionality remains a great challenge for classical LDA. In this paper we consider supervised pattern classification in the high-dimensional setting, in which the number of features is much larger than the number of observations, and we present a novel approach to the sparse optimal scoring problem using the zero-norm. The difficulty in treating the zero-norm is overcome by using appropriate continuous approximations, and the resulting problems are solved by alternating schemes based on DC (Difference of Convex functions) programming and DCA (DC Algorithms). Experimental results on both simulated and real datasets show the efficiency of the proposed algorithms compared with five state-of-the-art methods.
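To make the DCA idea concrete, the sketch below shows how a continuous DC approximation of the zero-norm can be minimized by DCA on a generic sparse least-squares model. This is an illustrative simplification, not the paper's optimal scoring formulation: the capped-ell_1 surrogate, the plain least-squares data term, and the ISTA inner solver are all assumptions for the example. The key mechanism matches the abstract's description: the penalty is split as a difference of convex functions, the concave part is linearized at each outer iteration, and the resulting convex subproblem is solved iteratively.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_sparse_ls(A, b, lam=0.1, alpha=5.0, outer=30, inner=200):
    """Illustrative DCA for  min_x 0.5||Ax - b||^2 + lam * sum_i min(alpha|x_i|, 1).

    The capped-ell_1 penalty min(alpha|t|, 1) is a continuous approximation
    of the zero-norm indicator.  Its DC decomposition is
        min(alpha|t|, 1) = alpha|t| - max(alpha|t| - 1, 0),
    both parts convex.  Each DCA step takes a subgradient of the second
    (subtracted) part and solves the resulting convex weighted-lasso
    subproblem, here by ISTA (proximal gradient descent).
    """
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the LS gradient
    step = 1.0 / L
    for _ in range(outer):
        # Subgradient of h(x) = lam * sum_i max(alpha|x_i| - 1, 0)
        y = lam * alpha * np.sign(x) * (alpha * np.abs(x) > 1.0)
        # Convex subproblem: min 0.5||Ax-b||^2 - <y, x> + lam*alpha*||x||_1
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - y
            x = soft_threshold(x - step * grad, step * lam * alpha)
    return x
```

Note the effect of the linearization: once a coordinate exceeds 1/alpha in magnitude, the subgradient term cancels its ell_1 penalty in the subproblem, so large coefficients are no longer shrunk. This is how the DC approximation reduces the bias of a plain ell_1 penalty while still driving small coefficients to exactly zero.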