Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4947824 | Neurocomputing | 2017 | 14 Pages | 
Extreme learning machine (ELM) for regression has been used in many fields because of its easy implementation, fast training speed and good generalization performance. However, the basic ELM with an ℓ2-norm loss function is sensitive to outliers. Recently, the ℓ1-norm loss function and the Huber loss function have been used in ELM to enhance robustness. However, these loss functions can still be affected by outliers because they grow linearly with the error. Moreover, existing robust ELM methods use only ℓ2-norm regularization or have no regularization term at all. In this study, we propose a unified model for robust regularized ELM regression using iteratively reweighted least squares (IRLS), which we call RELM-IRLS. We perform a comprehensive study of the robust loss function and the regularization term for robust ELM regression. Four loss functions (i.e., ℓ1-norm, Huber, Bisquare and Welsch) are used to enhance robustness, and two types of regularization (ℓ2-norm and ℓ1-norm) are used to avoid overfitting. Experiments show that the proposed RELM-IRLS with ℓ2-norm and ℓ1-norm regularization is stable and accurate for data with 0∼40% outlier levels, and that RELM-IRLS with ℓ1-norm regularization can obtain a compact network because the output weights of the network are highly sparse.
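The core idea of the abstract — solving the ELM output weights with a robust loss via IRLS — can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: it uses the Huber weight function (one of the four losses named above) with ℓ2-norm regularization, applied to a generic feature matrix `H` standing in for the ELM hidden-layer output; the function names, tuning constant `k=1.345`, and MAD scale estimate are assumptions for the sketch.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber IRLS weight: 1 for |r| <= k, k/|r| for larger residuals,
    so large (outlier) residuals are progressively downweighted."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def irls_ridge(H, y, lam=1e-3, k=1.345, n_iter=20):
    """Robust ℓ2-regularized least squares via IRLS (illustrative sketch).

    H : (n_samples, n_hidden) hidden-layer output matrix of the ELM
    y : (n_samples,) regression targets
    Returns the output-weight vector beta.
    """
    n, m = H.shape
    # Initialize with the plain ridge (non-robust) solution.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ y)
    for _ in range(n_iter):
        r = y - H @ beta
        # Robust residual scale via the median absolute deviation (MAD).
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12
        w = huber_weight(r / s, k)
        # Weighted regularized normal equations:
        # (H^T W H + lam I) beta = H^T W y, with W = diag(w).
        HW = H * w[:, None]
        beta = np.linalg.solve(H.T @ HW + lam * np.eye(m), HW.T @ y)
    return beta
```

Swapping `huber_weight` for a Bisquare or Welsch weight function changes only the reweighting step; the ℓ1-norm-regularized variant described in the abstract would instead require a sparse solver in place of the ridge-type linear solve.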