Article ID: 6864375
Journal: Neurocomputing
Published Year: 2018
Pages: 46
File Type: PDF
Abstract
The extreme learning machine (ELM) is a popular analytically trained single-hidden-layer feedforward neural network, valued for its rapid learning speed. However, vanilla dense ELMs suffer from overfitting when the number of hidden neurons is large, and this density also directly slows both training and prediction. In this study, we propose an incremental method for sparsifying the ELM using a newly devised indicator driven by the condition number of the ELM design matrix, which we call the sparse pseudoinverse incremental ELM (SPI-ELM). SPI-ELM exhibits better generalization performance and lower run-time complexity than the standard ELM. Because the sparsification process can slow the learning speed of SPI-ELM, we introduce an iterative matrix decomposition algorithm to address this issue. We also demonstrate a useful relationship between the condition number of the ELM design matrix and the number of hidden neurons; this relationship helps to explain the roles of the random weights and nonlinear activation functions in ELMs. We evaluated SPI-ELM on 20 benchmark data sets from the University of California, Irvine repository and three real-world databases from the computer vision domain.
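For readers unfamiliar with the mechanics the abstract assumes, the following minimal NumPy sketch (our own illustration; the function names such as elm_train and the choice of a sigmoid activation are assumptions, not taken from the paper) trains a vanilla dense ELM by solving the output weights with a pseudoinverse and reports the condition number of the design matrix H, the quantity that drives the paper's sparsification indicator. It does not implement SPI-ELM itself.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Train a vanilla dense ELM: random hidden layer + pseudoinverse solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid design matrix H
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta, np.linalg.cond(H)             # cond(H) drives the sparsification indicator

def elm_predict(X, W, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

# Toy regression: watch cond(H) as hidden neurons are added.
rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, size=(200, 5))
T = np.sin(X.sum(axis=1, keepdims=True))
for n_hidden in (10, 50, 200, 400):
    *_, cond_H = elm_train(X, T, n_hidden)
    print(f"n_hidden={n_hidden:4d}  cond(H)={cond_H:.3e}")
```

Running the loop typically shows cond(H) growing by orders of magnitude as neurons are added, which illustrates why a condition-number-driven indicator is a plausible trigger for pruning hidden neurons in an over-parameterized, ill-conditioned design matrix.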
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors