Article code | Journal code | Year | Original article | Full text |
---|---|---|---|---|
495469 | 862827 | 2014 | 25-page PDF | Free download |
• Differential Evolution algorithms applied to ANN training suffer from stagnation.
• A lack of small-magnitude difference vectors is observed during ANN training with Differential Evolution methods (see the sketch after this list).
• For benchmark problems, this lack of small-magnitude difference vectors appears only occasionally.
• The DEGL algorithm outperforms other Differential Evolution variants for ANN training.
• The algorithms that perform best on benchmark problems do not perform well for ANN training.
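The stagnation symptom named in the highlights can be made concrete: in classic DE mutation, every search step is built from a scaled difference vector between population members, so a population that never produces small-magnitude difference vectors cannot fine-tune the network weights. Below is a minimal Python/NumPy sketch of DE/rand/1 mutation with the difference-vector magnitudes exposed as a diagnostic; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def de_rand_1_mutation(pop, F=0.5, rng=None):
    """Classic DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3).

    Returns the donor vectors and the magnitudes of the scaled
    difference vectors F * (x_r2 - x_r3) used to build them.
    """
    rng = rng or np.random.default_rng()
    n = len(pop)
    donors = np.empty_like(pop)
    diff_norms = np.empty(n)
    for i in range(n):
        # Pick three distinct individuals, all different from i.
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        diff = F * (pop[r2] - pop[r3])
        donors[i] = pop[r1] + diff
        diff_norms[i] = np.linalg.norm(diff)
    return donors, diff_norms

# Diagnostic for the symptom in the highlights: if even the smallest
# difference vector stays large while fitness stops improving, the
# population cannot make the fine-grained steps weight tuning needs.
pop = np.random.default_rng(0).normal(size=(50, 10))  # 50 weight vectors
_, norms = de_rand_1_mutation(pop)
print(f"min/median difference-vector magnitude: "
      f"{norms.min():.3f} / {np.median(norms):.3f}")
```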
A large number of population-based Differential Evolution algorithms have been proposed in the literature, and their good performance is often reported for benchmark problems. However, when applied to Neural Network training for regression, these methods usually perform worse than the classical Levenberg–Marquardt algorithm. The major aim of the present paper is to clarify why. In this research, in which Neural Networks are used for a real-world regression problem, it is shown empirically that various Differential Evolution algorithms fall into stagnation during Neural Network training: after some time the individuals stop improving, or improve only occasionally, even though the population diversity remains high. Similar behavior of Differential Evolution algorithms is observed for some, but not the majority of, benchmark problems. The impact of the Differential Evolution population size, the initialization range, and the bounds on Neural Network performance is also discussed.

Among the tested algorithms, only the Differential Evolution with Global and Local neighborhood-based mutation operators (DEGL) performs better than the Levenberg–Marquardt algorithm for Neural Network training. This variant also shows symptoms of stagnation, but much weaker ones than the other tested variants. To enhance exploitation in the final stage of Neural Network training, it is proposed to merge DEGL with the Trigonometric mutation operator. This hybrid does not eliminate the stagnation problem, but it slightly improves the performance of the trained Neural Networks.
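Since the abstract names the two operators it merges, a compact sketch may help fix ideas: the DEGL donor blends a local, ring-neighborhood-based mutation with a global one built from the population best, and the trigonometric operator occasionally replaces it with a fitness-weighted centroid step. This is a minimal Python/NumPy sketch based on the standard published formulations of DEGL (Das et al.) and trigonometric mutation (Fan and Lampinen), not the paper's exact code; all parameter names and defaults are illustrative.

```python
import numpy as np

def degl_trig_donor(pop, fit, i, best, w=0.5, alpha=0.8, beta=0.8,
                    k=3, trig_prob=0.05, rng=None):
    """One donor vector for individual i, DEGL-style, occasionally
    replaced by a trigonometric mutation step (minimization assumed).
    """
    rng = rng or np.random.default_rng()
    n = len(pop)
    if rng.random() < trig_prob:
        # Trigonometric mutation: centroid of three random individuals,
        # perturbed toward the fitter ones via normalized fitness weights.
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        f = np.abs([fit[r1], fit[r2], fit[r3]])
        p = f / f.sum()
        return ((pop[r1] + pop[r2] + pop[r3]) / 3.0
                + (p[1] - p[0]) * (pop[r1] - pop[r2])
                + (p[2] - p[1]) * (pop[r2] - pop[r3])
                + (p[0] - p[2]) * (pop[r3] - pop[r1]))
    # Local donor: ring neighborhood {i-k, ..., i+k} (indices mod n).
    nbh = [(i + j) % n for j in range(-k, k + 1)]
    nbest = min(nbh, key=lambda j: fit[j])
    p_, q_ = rng.choice([j for j in nbh if j != i], size=2, replace=False)
    local = (pop[i] + alpha * (pop[nbest] - pop[i])
             + beta * (pop[p_] - pop[q_]))
    # Global donor: pulls toward the population-best individual.
    r1, r2 = rng.choice([j for j in range(n) if j != i],
                        size=2, replace=False)
    glob = (pop[i] + alpha * (pop[best] - pop[i])
            + beta * (pop[r1] - pop[r2]))
    # Blend: w -> 1 emphasizes the exploitative global model,
    # w -> 0 the explorative local neighborhood model.
    return w * glob + (1.0 - w) * local
```

The blending weight w is the knob DEGL uses to trade exploration against exploitation, which is why the abstract singles this variant out for showing weaker stagnation than the other tested DE schemes.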
Journal: Applied Soft Computing - Volume 21, August 2014, Pages 382–406