Article code | Journal code | Year published | English article | Full text |
---|---|---|---|---|
388480 | 660926 | 2011 | 6-page PDF | Free download |
A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for effective training of feedforward neural networks. The genetic algorithm, with adaptive reproduction, crossover, and mutation operators, searches for the initial weights and biases of the neural network, while the modified Newton method, similar to the BFGS algorithm, improves network training performance. Benchmark tests show that the two-stage algorithm outperforms several conventional methods (steepest descent, steepest descent with an adaptive learning rate, conjugate gradient, and Newton-based methods) and is well suited to small networks in engineering applications. In addition to numerical simulation, the effectiveness of the two-stage algorithm is validated by experiments in system identification and vibration suppression.
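The two-stage idea described above can be illustrated with a minimal sketch. All names, the network size, and the GA settings below are my assumptions for illustration, not the authors' code: a small genetic algorithm with adaptive mutation searches for a good initial weight vector, and a quasi-Newton (BFGS) stage then refines it.

```python
# Hypothetical sketch of a two-stage training scheme: GA search for the
# initial weights, then BFGS refinement. Network size and GA parameters
# are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy task: learn y = sin(x) with a 1-4-1 feedforward network.
X = np.linspace(-np.pi, np.pi, 40)
Y = np.sin(X)

N_HID = 4
N_W = 3 * N_HID + 1  # w1 (4), b1 (4), w2 (4), b2 (1)

def unpack(w):
    w1 = w[:N_HID]; b1 = w[N_HID:2 * N_HID]
    w2 = w[2 * N_HID:3 * N_HID]; b2 = w[3 * N_HID]
    return w1, b1, w2, b2

def forward(w, x):
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(np.outer(x, w1) + b1)  # hidden layer
    return h @ w2 + b2                 # linear output

def mse(w):
    return np.mean((forward(w, X) - Y) ** 2)

# --- Stage 1: genetic algorithm with (crudely) adaptive mutation -------
POP, GENS = 30, 50
pop = rng.normal(0.0, 1.0, (POP, N_W))
for gen in range(GENS):
    fit = np.array([mse(p) for p in pop])
    parents = pop[np.argsort(fit)[:POP // 2]]  # truncation selection
    # Adaptive mutation: shrink step size as generations progress.
    sigma = 0.5 * (1.0 - gen / GENS) + 0.05
    children = []
    for _ in range(POP - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_W) < 0.5           # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0.0, sigma, N_W))
    pop = np.vstack([parents, children])

w0 = pop[np.argmin([mse(p) for p in pop])]     # best GA individual

# --- Stage 2: quasi-Newton (BFGS) refinement ---------------------------
res = minimize(mse, w0, method="BFGS")
print(f"GA stage MSE: {mse(w0):.5f}")
print(f"Final MSE:    {res.fun:.5f}")
```

The division of labor mirrors the abstract: the GA is a global, gradient-free explorer that only has to land in a good basin, while BFGS converges quickly once started near a minimum.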
Research highlights
► A two-stage algorithm combining the advantages of an adaptive genetic algorithm and a modified Newton method is developed for feedforward neural networks.
► The genetic algorithm searches for the initial weights and biases of the neural network.
► The modified Newton method, similar to the BFGS algorithm, improves network training performance.
► Benchmark tests show that the algorithm is superior to conventional methods and well suited to small networks in engineering applications.
Journal: Expert Systems with Applications - Volume 38, Issue 10, 15 September 2011, Pages 12189–12194