Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
5499618 | Chaos, Solitons & Fractals | 2017 | 5 | 
Abstract
An improved recursive Levenberg-Marquardt (RLM) algorithm is proposed to train neural networks more efficiently. The error criterion of the RLM algorithm is modified to reduce the impact of the forgetting factor on convergence. The remedy used to apply the matrix inversion lemma in the RLM algorithm is extended from one row to multiple rows, improving the success rate of convergence, and the adjustment strategy is then revised to match the extended remedy. Finally, the algorithm is tested on two chaotic systems; the results show improved convergence.
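The abstract refers to a recursive Levenberg-Marquardt update that propagates an inverse Hessian estimate with the matrix inversion lemma, discounts old samples through a forgetting factor, and applies the damping term one row (one diagonal element) at a time. The sketch below is a minimal illustration of that general mechanism under those assumptions; the variable names (theta, P, lam, mu), the toy linear model, and all numerical values are illustrative, and this is not the authors' improved algorithm.

```python
# Hedged sketch of a recursive Levenberg-Marquardt (RLM)-style update:
# the inverse Hessian estimate P is propagated with the matrix inversion
# lemma, old samples are discounted by a forgetting factor lam, and the
# LM damping mu is folded into one diagonal element ("one row") per step.
# Illustrative only; not the paper's improved algorithm.
import numpy as np


def rlm_step(theta, P, jac_row, error, row_idx, lam=0.99, mu=1e-3):
    """One RLM-style update for a single training sample.

    theta   : (n,) parameter vector of the model
    P       : (n, n) inverse of the discounted Gauss-Newton Hessian estimate
    jac_row : (n,) Jacobian of the model output w.r.t. theta at this sample
    error   : scalar residual, target - prediction
    row_idx : diagonal element that receives the damping term this step
    lam     : forgetting factor (< 1 discounts old data)
    mu      : Levenberg-Marquardt damping
    """
    n = theta.size
    j = jac_row.reshape(n, 1)

    # Rank-one matrix-inversion-lemma update for the data term j j^T.
    denom = lam + (j.T @ P @ j).item()
    K = (P @ j) / denom                    # gain vector
    P = (P - K @ (j.T @ P)) / lam          # discounted covariance update
    theta = theta + K.ravel() * error      # parameter update

    # Second rank-one update: add mu to one diagonal element of the Hessian
    # estimate (the "one row" remedy), again via the matrix inversion lemma.
    e_i = np.zeros((n, 1))
    e_i[row_idx] = 1.0
    P = P - (mu * (P @ e_i) @ (e_i.T @ P)) / (1.0 + mu * (e_i.T @ P @ e_i).item())
    return theta, P


# Toy usage: fit y = a*x + b recursively; the Jacobian of the prediction is [x, 1].
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for k in range(200):
    x = rng.uniform(-1, 1)
    y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal()
    jac = np.array([x, 1.0])
    err = y - jac @ theta
    theta, P = rlm_step(theta, P, jac, err, row_idx=k % 2)
print(theta)  # should approach [2.0, 0.5]
```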
Related Topics
Physical Sciences and Engineering
Physics and Astronomy
Statistical and Nonlinear Physics
Authors
Shi Xiancheng, Feng Yucheng, Zeng Jinsong, Chen Kefu