Article ID: 4947110 · Journal: Neurocomputing · Published Year: 2017 · Pages: 9 · File Type: PDF
Abstract
The random assignment strategy for input weights gives the extreme learning machine (ELM) several advantages, such as fast learning speed and minimal manual intervention. However, the Monte Carlo (MC) random sampling method typically used to generate the input weights of ELM has poor capability of sample structure preserving (SSP), which degrades learning and generalization performance. For this reason, the Quasi-Monte Carlo (QMC) method is revisited, and it is shown that the distortion error of a QMC projection converges faster than that of MC for relatively low-dimensional problems. Further, a unified random orthogonal (RO) projection method is proposed, and it is shown that the RO method always provides the optimal transformation in the sense of minimizing the total loss of pairwise distances between samples. Experimental results on real-world benchmark data sets verify the theoretical analysis and indicate that, by enhancing the SSP capability of the input weights, the QMC and RO projection methods tend to give ELM algorithms better generalization performance.
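The two alternatives to plain MC sampling described in the abstract can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names, the [-1, 1] weight range, and the use of a scrambled Sobol sequence and QR factorization are our assumptions about one reasonable realization.

```python
# Illustrative sketch of two structure-preserving ways to draw ELM input
# weights (names and parameter choices are ours, not from the paper).
import numpy as np
from scipy.stats import qmc


def qmc_input_weights(n_features, n_hidden, seed=0):
    """QMC input weights: low-discrepancy Sobol points rescaled to [-1, 1].

    Returns an array of shape (n_hidden, n_features), one weight vector
    per hidden node. n_hidden is best taken as a power of 2 for Sobol.
    """
    sampler = qmc.Sobol(d=n_features, scramble=True, seed=seed)
    u = sampler.random(n_hidden)        # points in [0, 1)^d, more evenly
                                        # spread than i.i.d. MC samples
    return 2.0 * u - 1.0                # rescale to [-1, 1]


def random_orthogonal_weights(n_features, n_hidden, seed=0):
    """Random orthogonal projection via QR of a Gaussian matrix.

    Assumes n_features >= n_hidden; returns a (n_features, n_hidden)
    matrix with orthonormal columns, so pairwise distances between
    samples are distorted as little as possible by the projection.
    """
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_features, n_hidden))
    q, _ = np.linalg.qr(g)              # orthonormal columns
    return q
```

For example, `qmc_input_weights(8, 16)` would draw 16 hidden-node weight vectors in 8 dimensions, and `random_orthogonal_weights(10, 5).T @ random_orthogonal_weights(10, 5)` recovers the 5×5 identity, confirming orthonormality.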
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence