Article ID Journal Published Year Pages File Type
4947108 Neurocomputing 2017 20 Pages PDF
Abstract
The error penalty parameter C in the optimization extreme learning machine (OELM) can take any positive value, which makes it difficult to choose correctly for different applications. In this paper, we reformulate OELM with a new regularization parameter ν (ν-OELM), inspired by Schölkopf et al. The regularization in terms of ν is bounded between 0 and 1 and is easier to interpret than C. This paper shows that: (1) ν-OELM and ν-SVM have similar dual optimization formulations, but ν-OELM has fewer optimization constraints due to its special capability of class separation, and (2) experimental results on both artificial and real binary classification problems show that ν-OELM tends to achieve better generalization performance than ν-SVM, OELM, and other popular machine learning approaches, and it is computationally efficient on high-dimensional data sets. Additionally, the optimal parameter ν in ν-OELM can easily be selected from a few candidates.
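The ν-OELM formulation itself is not available in common libraries, but the ν-parameterization of Schölkopf et al. that the abstract refers to can be illustrated with scikit-learn's ν-SVM (`NuSVC`). The sketch below, under the assumption of a synthetic binary classification task, shows the key property the abstract highlights: ν is bounded in (0, 1] and can be searched over a handful of candidates, unlike the unbounded C.

```python
# Illustrative sketch of the nu-parameterization (Schölkopf et al.) using
# scikit-learn's NuSVC. Note: this is nu-SVM, not the paper's nu-OELM,
# which is not implemented in scikit-learn; the dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# nu upper-bounds the fraction of margin errors and lower-bounds the
# fraction of support vectors, so a few candidates in (0, 1) suffice.
results = {}
for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    sv_fraction = clf.n_support_.sum() / len(X_tr)
    results[nu] = (clf.score(X_te, y_te), sv_fraction)
    print(f"nu={nu:.1f}  test accuracy={results[nu][0]:.2f}  "
          f"support-vector fraction={sv_fraction:.2f}")
```

In contrast, tuning C in a C-SVM (or in OELM) requires searching an unbounded grid such as 2⁻⁵, …, 2¹⁵, which is the practical difficulty motivating the ν reformulation.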
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors