Article ID: 410700
Journal: Neurocomputing
Published Year: 2011
Pages: 9
File Type: PDF
Abstract

In this paper, a new learning algorithm is proposed for simultaneously learning a function and its derivatives, as an extension of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. The formulation leads to a system of linear equations whose solution is obtained via the Moore–Penrose generalized inverse. In this approach, the number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error falls below the prescribed learning accuracy. To verify the efficiency of the proposed method, a number of examples are considered, and the results obtained with the proposed method are compared with those of two other popular methods. The proposed method is observed to be fast and to produce similar or better generalization performance on the test data.
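To illustrate the incremental-growth idea described above, the following is a minimal sketch of an error-minimized ELM that adds hidden nodes group by group and solves for the output weights with the Moore–Penrose pseudoinverse. The function names, the sigmoid activation, the group size, and the stopping criterion are assumptions for illustration only; the paper's formulation additionally fits derivative targets and updates the output weights incrementally rather than recomputing the pseudoinverse from scratch.

```python
import numpy as np

# Illustrative sketch only (not the authors' exact algorithm): grow the
# hidden layer until the training error drops below a target accuracy,
# solving for the output weights with the Moore-Penrose pseudoinverse.

def hidden_output(X, W, b):
    """Hidden-layer output matrix H with a sigmoid activation (assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def incremental_elm(X, T, target_rmse=1e-3, step=5, max_nodes=200, rng=None):
    """Add `step` randomly generated hidden nodes at a time until the
    training RMSE is below `target_rmse` or `max_nodes` is reached."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = np.empty((n_features, 0))   # input weights, one column per hidden node
    b = np.empty((1, 0))            # hidden-node biases
    beta, rmse = None, np.inf
    while W.shape[1] < max_nodes and rmse > target_rmse:
        # append a new group of hidden nodes with random parameters
        W = np.hstack([W, rng.standard_normal((n_features, step))])
        b = np.hstack([b, rng.standard_normal((1, step))])
        H = hidden_output(X, W, b)
        # least-squares output weights via the Moore-Penrose pseudoinverse
        beta = np.linalg.pinv(H) @ T
        rmse = np.sqrt(np.mean((H @ beta - T) ** 2))
    return W, b, beta, rmse

# Usage on a toy 1-D regression problem (illustrative only)
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(3.0 * X)
W, b, beta, rmse = incremental_elm(X, T, target_rmse=1e-3, rng=0)
print(f"hidden nodes: {W.shape[1]}, training RMSE: {rmse:.2e}")
```

For simplicity the sketch recomputes the pseudoinverse after each growth step; the efficiency claimed in the abstract comes from updating the output weights incrementally instead.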

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence