| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6863625 | Neurocomputing | 2018 | 23 Pages | |
Abstract
Extreme learning machine (ELM) can be viewed as a single-hidden-layer feedforward neural network (FNN) learning system whose input weights and hidden layer biases are randomly assigned, while only the output weights need tuning. In the regression framework, a fundamental problem of ELM learning is whether the ELM estimator is universally consistent, that is, whether it can approximate an arbitrary regression function to any accuracy, provided the number of training samples is sufficiently large. The aim of this paper is two-fold. One is to verify the strong universal consistency of the ELM estimator; the other is to present a necessary and sufficient condition on the activation function under which the corresponding ELM estimator is strongly universally consistent. The obtained results underpin the feasibility of ELM and provide theoretical guidance for the selection of activation functions in ELM learning.
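To make the ELM training scheme described in the abstract concrete, the following is a minimal sketch in Python with NumPy: input weights and hidden biases are drawn at random and left untouched, and only the output weights are fit, here by least squares. The function names (`elm_fit`, `elm_predict`), the sigmoid activation, and the hidden-layer size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=None):
    """Sketch of single-hidden-layer ELM regression (assumed setup, not the
    paper's exact formulation): random input weights/biases, output weights
    solved by least squares."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input weights (never tuned)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer outputs (sigmoid assumed)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights: least-squares fit
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Usage: approximate a 1-D regression function from noisy samples.
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).normal(size=200)
W, b, beta = elm_fit(X, y, n_hidden=50, seed=0)
y_hat = elm_predict(X, W, b, beta)
```

In this sketch, consistency in the paper's sense would mean that, as the number of samples and hidden nodes grow appropriately, `y_hat` converges to the underlying regression function for a suitable class of activation functions.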
Related Topics
- Physical Sciences and Engineering
- Computer Science
- Artificial Intelligence
Authors
Xia Liu, Lin Xu