Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6422161 | Applied Mathematics and Computation | 2011 | 21 |
Abstract
In this paper we propose a nonmonotone approach to training recurrent neural networks for temporal sequence processing applications. The approach allows the learning error to deteriorate in some iterations while the network's performance improves over time. A self-scaling BFGS method is equipped with an adaptive nonmonotone technique that employs approximations of the Lipschitz constant, and is tested on a set of sequence processing problems. Simulation results show that the proposed algorithm outperforms BFGS as well as other methods previously applied to these sequences, providing an effective modification that is capable of training recurrent networks of various architectures.
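The abstract only summarizes the method, so the following is a minimal illustrative sketch of the general idea behind nonmonotone training, assuming a Grippo-style nonmonotone Armijo acceptance rule and a finite-difference Lipschitz estimate of the kind such adaptive schemes typically use. The function names, the fixed memory size M, and the toy steepest-descent loop (standing in for the paper's self-scaling BFGS direction) are all hypothetical choices for illustration, not the authors' exact algorithm.

```python
import numpy as np

def nonmonotone_step(f, grad, x, d, recent_f, delta=1e-4, shrink=0.5, max_tries=30):
    # Grippo-style nonmonotone Armijo rule: the new point only has to improve
    # on the WORST of the last few stored function values, so individual
    # iterations may let the training error rise temporarily.
    f_ref = max(recent_f)
    slope = grad(x) @ d
    alpha = 1.0
    for _ in range(max_tries):
        if f(x + alpha * d) <= f_ref + delta * alpha * slope:
            break
        alpha *= shrink
    return alpha

def local_lipschitz(g_new, g_old, x_new, x_old):
    # Finite-difference estimate of the local Lipschitz constant of the
    # gradient; in an adaptive scheme a quantity like this could steer the
    # nonmonotone memory size (illustrative assumption, not the paper's rule).
    return np.linalg.norm(g_new - g_old) / max(np.linalg.norm(x_new - x_old), 1e-12)

# --- toy usage on an ill-conditioned quadratic (illustrative only) ---
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = np.array([1.0, 1.0])
g = grad(x)
history, M = [f(x)], 5           # M = nonmonotone memory (hypothetical fixed size)
for _ in range(100):
    d = -g                       # steepest descent stands in for the
                                 # self-scaling BFGS search direction
    alpha = nonmonotone_step(f, grad, x, d, history[-M:])
    x_new = x + alpha * d
    g_new = grad(x_new)
    lam = local_lipschitz(g_new, g, x_new, x)   # would drive memory adaptation
    x, g = x_new, g_new
    history.append(f(x))
print(f"final f(x) = {history[-1]:.3e}")
```

Because acceptance is measured against the maximum of the last M function values rather than the current one, occasional uphill steps are tolerated, which is the deterioration-then-improvement behaviour the abstract describes.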
Related Topics
Physical Sciences and Engineering › Mathematics › Applied Mathematics
Authors
Chun-Cheng Peng, George D. Magoulas