Article ID | Journal | Published Year | Pages
---|---|---|---
6865874 | Neurocomputing | 2015 | 6 Pages
Abstract
This paper examines conditions under which the Resilient Propagation algorithm (Rprop) fails to converge, identifies limitations of the so-called Globally Convergent Rprop algorithm (GRprop), which was previously thought to guarantee convergence, and considers pathological behaviour of the implementation of GRprop in the neuralnet software package. A new robust convergent back-propagation algorithm, ARCprop, is presented. The new algorithm builds on Rprop, but guarantees convergence by shortening steps as necessary to achieve a sufficient reduction in global error. Simulation results on four benchmark problems from the PROBEN1 collection show that the new algorithm achieves similar levels of performance to Rprop in terms of training speed, training accuracy, and generalization.
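The abstract's core idea — Rprop's sign-based step-size adaptation combined with a safeguard that shortens steps until the global error decreases sufficiently — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact ARCprop procedure: the function and parameter names (`rprop_with_sufficient_decrease`, `eta_plus`, `eta_minus`, the sufficient-decrease constant `c`) and the simple backtracking rule are assumptions chosen for clarity.

```python
import numpy as np

def rprop_with_sufficient_decrease(grad_fn, err_fn, w, n_iter=200,
                                   eta_plus=1.2, eta_minus=0.5,
                                   delta_init=0.1, delta_min=1e-6,
                                   delta_max=1.0, c=1e-4):
    """Rprop-style sign-based updates with a backtracking safeguard.

    If a proposed step does not reduce the global error by a sufficient
    amount, the step is halved -- a simplified stand-in for the
    step-shortening convergence guarantee described in the abstract.
    """
    delta = np.full_like(w, delta_init)   # per-weight step sizes
    g_prev = np.zeros_like(w)
    for _ in range(n_iter):
        g = grad_fn(w)
        sign_change = g * g_prev
        # Standard Rprop adaptation: grow steps while the gradient sign
        # is stable, shrink them after a sign change (overshoot).
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        delta = np.where(sign_change < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        step = -np.sign(g) * delta
        e0 = err_fn(w)
        # Safeguard: halve the step until the global error shows a
        # sufficient reduction (Armijo-like condition on |step * g|).
        while (err_fn(w + step) > e0 - c * np.sum(np.abs(step * g))
               and np.max(np.abs(step)) > delta_min):
            step *= 0.5
        w = w + step
        g_prev = g
    return w
```

On a convex test error such as a simple quadratic, the safeguard leaves ordinary Rprop behaviour untouched when steps already reduce the error, and only intervenes on overshooting steps.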
Related Topics: Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors
Todd M. Bailey