Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
406141 | Neurocomputing | 2016 | 13 |
Extreme learning machine (ELM) uses a non-iterative method to train single-hidden-layer feed-forward networks (SLFNs) and has proven to be an efficient and effective learning model for both classification and regression. The main advantage of ELM is that the input weights and hidden-layer biases can be randomly generated, which enables an analytical solution for the output weights. In this paper, we propose a discriminative manifold ELM (DMELM) that simultaneously considers the discriminative information and the geometric structure of the data; specifically, we exploit the discriminative information in the local neighborhood around each data point. To this end, a graph regularizer based on a newly designed graph Laplacian that characterizes both properties is formulated and incorporated into the ELM objective. In DMELM, the output weights can still be obtained in analytical form. Extensive experiments on image and EEG signal classification are conducted to evaluate the effectiveness of DMELM. The results show that DMELM consistently outperforms the original ELM and yields promising results in comparison with several state-of-the-art algorithms, which suggests that both discriminative and manifold information are beneficial to classification.
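The closed-form training described above can be sketched as follows. This is a minimal illustration of a standard ELM with a generic graph-Laplacian regularizer, not the paper's exact DMELM formulation: the k-NN Laplacian, the `tanh` activation, and the hyperparameters `lam` and `gamma` are all illustrative assumptions, and the paper's specially designed Laplacian (which also encodes local discriminative information) is replaced here by a plain neighborhood graph.

```python
import numpy as np

def knn_laplacian(X, k=3):
    """Unnormalized Laplacian L = D - W of a symmetrized k-NN graph
    (a generic stand-in for the paper's designed Laplacian)."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]   # k nearest neighbors, skipping self
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize the adjacency
    return np.diag(W.sum(1)) - W

def train_elm(X, T, n_hidden=50, lam=1e-2, gamma=1e-3, seed=0):
    """Train a graph-regularized ELM; X is (n, d), T is (n, classes) one-hot."""
    rng = np.random.default_rng(seed)
    Win = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                  # random hidden biases
    H = np.tanh(X @ Win + b)                           # hidden-layer output matrix
    L = knn_laplacian(X)
    # Analytical output weights:
    #   beta = (H^T H + lam * H^T L H + gamma * I)^{-1} H^T T
    A = H.T @ H + lam * (H.T @ L @ H) + gamma * np.eye(n_hidden)
    beta = np.linalg.solve(A, H.T @ T)
    return Win, b, beta

def predict(X, Win, b, beta):
    """Class scores for new samples; argmax over columns gives labels."""
    return np.tanh(X @ Win + b) @ beta
```

Note that `Win` and `b` are never updated: only `beta` is fitted, via a single linear solve, which is what makes ELM training non-iterative.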