Article ID: 8965196
Journal: Neurocomputing
Published Year: 2018
Pages: 29 Pages
File Type: PDF
Abstract
The incremental extreme learning machine (I-ELM) has been proved to be an efficient and simple universal approximator. However, its network architecture can grow very large because of inefficient nodes that contribute little to reducing the residual error. Moreover, its output weights are not the least-squares solution. To reduce such inefficient nodes, bidirectional ELM (B-ELM), which analytically calculates the input weights of the even-numbered nodes, was proposed. Further analysis shows that B-ELM can be improved to achieve an even more compact structure. This paper proposes the modified B-ELM (MB-ELM), which incorporates an orthogonalization step into B-ELM: the output vectors of the hidden nodes are orthogonalized, and the resulting vectors are used as the output vectors. MB-ELM greatly reduces the number of inefficient nodes and obtains an output weight vector that is the least-squares solution, so it achieves a faster convergence rate and a more compact network architecture. In particular, it is proved theoretically that MB-ELM can reduce the residual error to zero by adding only two nodes to the network. Simulation results verify these conclusions and show that MB-ELM reaches a lower limit of residual error than other I-ELM methods.
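A minimal sketch of the orthogonalization idea described in the abstract, assuming plain random hidden nodes rather than the paper's analytic B-ELM input-weight calculation (the toy data, sigmoid node generator, and node count are illustrative assumptions, not the authors' setup). Orthogonalizing each new hidden output vector against the previous ones makes the per-node output weight a simple projection, which is the least-squares solution, so the residual shrinks monotonically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumption: any 1-D target suffices for the sketch)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.sin(X.sum(axis=1))

def random_node_output(X):
    """Output vector of one random sigmoid hidden node, as in I-ELM."""
    w = rng.standard_normal(X.shape[1])
    b = rng.standard_normal()
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

e = y.copy()      # current residual error vector
basis = []        # orthonormalized hidden output vectors kept so far
for _ in range(20):
    h = random_node_output(X)
    # Gram-Schmidt step: remove the components of h that lie in the
    # span of earlier hidden output vectors (the orthogonalization
    # the abstract attributes to MB-ELM)
    for q in basis:
        h = h - (h @ q) * q
    norm = np.linalg.norm(h)
    if norm < 1e-10:
        continue  # node adds no new direction: an "inefficient node"
    q = h / norm
    basis.append(q)
    # With orthonormal vectors, projecting the residual onto q is
    # exactly the least-squares output weight for this node
    e = e - (e @ q) * q

# The residual norm can only decrease as nodes are added
print(np.linalg.norm(y), "->", np.linalg.norm(e))
```

This is only a structural illustration of why orthogonalized output vectors give least-squares weights and faster residual decay; the paper's two-node zero-residual result depends on the analytic B-ELM node construction, which is not reproduced here.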
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence