Article code: 495469
Journal code: 862827
Publication year: 2014
English article: 25-page PDF
Full-text version: free download
English title of the ISI article
Differential Evolution algorithms applied to Neural Network training suffer from stagnation
Keywords
Differential Evolution, Artificial Neural Network, Stagnation, Global and local optimization, Benchmark problems, Algorithm population size
Related subject areas
Engineering and Basic Sciences; Computer Engineering; Computer Science Software
English abstract


• Differential Evolution algorithms applied to ANN training suffer from stagnation.
• A lack of small-magnitude difference vectors is observed when ANNs are trained by Differential Evolution methods (see the diagnostic sketch after this list).
• For benchmark problems, the lack of small-magnitude difference vectors is only occasionally observed.
• The DEGL algorithm outperforms the other Differential Evolution variants for ANN training.
• The algorithms that perform best on benchmark problems do not perform well for ANN training.
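To make the stagnation symptom above concrete, here is a minimal NumPy sketch of a diagnostic one could run on a Differential Evolution population. It is an illustration only: the function name, thresholds, and window convention are assumptions, not values taken from the paper.

```python
import numpy as np

def stagnation_report(pop, best_history, small_diff=1e-3,
                      min_progress=1e-6, eps=1e-12):
    """Check for the stagnation symptom described above: the best
    fitness has stopped improving while the population stays diverse,
    i.e. almost no small-magnitude difference vectors exist.

    pop:          (n, d) NumPy array of individuals.
    best_history: best-so-far fitness values over a recent window.
    Thresholds are illustrative defaults, not values from the paper."""
    # Magnitudes of all pairwise difference vectors x_a - x_b.
    diffs = pop[:, None, :] - pop[None, :, :]
    mags = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(pop), k=1)]
    share_small = float(np.mean(mags < small_diff))

    # Relative improvement of the best-so-far fitness over the window.
    progress = abs(best_history[0] - best_history[-1]) / (abs(best_history[0]) + eps)

    return {
        "mean_diff_magnitude": float(mags.mean()),
        "share_small_diffs": share_small,  # near zero => high diversity
        "stagnating": progress < min_progress and share_small < 0.01,
    }
```

In these terms, a population whose `share_small_diffs` stays near zero while `progress` stalls is stagnating rather than prematurely converging, which is the behavior the paper reports for ANN training.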

A large number of population-based Differential Evolution algorithms have been proposed in the literature, and good performance is often reported for them on benchmark problems. However, when applied to Neural Network training for regression, these methods usually perform worse than the classical Levenberg–Marquardt algorithm. The major aim of the present paper is to clarify why. In this research, in which Neural Networks are used for a real-world regression problem, it is empirically shown that various Differential Evolution algorithms fall into stagnation during Neural Network training: after some time the individuals stop improving, or improve only occasionally, although the population diversity remains high. Similar behavior of Differential Evolution algorithms is observed for some, but not the majority of, benchmark problems. The impact of the Differential Evolution population size, the initialization range, and the bounds on Neural Network performance is also discussed.

Among the tested algorithms, only the Differential Evolution with Global and Local neighborhood-based mutation operators (DEGL) performs better than the Levenberg–Marquardt algorithm for Neural Network training. This variant also shows symptoms of stagnation, but much weaker ones than the other tested variants. To enhance exploitation in the final stage of Neural Network training, it is proposed to merge DEGL with the Trigonometric mutation operator. The merged method does not eliminate the stagnation problem, but it slightly improves the performance of the trained Neural Networks. Both mutation operators are sketched in code below.
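For concreteness, here is a minimal NumPy sketch of the two mutation operators named in the abstract, following their standard formulations in the Differential Evolution literature (the global/local neighborhood-based DEGL mutation of Das et al. and the Trigonometric mutation of Fan and Lampinen). Function names, parameter defaults, and the ring-neighborhood convention are assumptions for illustration; the paper's exact implementation may differ.

```python
import numpy as np

def degl_donor(pop, fitness, i, alpha=0.8, beta=0.8, w=0.5, k=2, rng=None):
    """DEGL donor vector: a weighted blend of a global mutation (towards
    the population best) and a local mutation (towards the best individual
    on a ring neighborhood of radius k around index i).
    pop is an (n, d) array, fitness an (n,) array (lower is better)."""
    rng = rng or np.random.default_rng()
    n = len(pop)

    # Global component: attraction to the best individual of the whole population.
    r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    g = pop[i] + alpha * (pop[np.argmin(fitness)] - pop[i]) + beta * (pop[r1] - pop[r2])

    # Local component: attraction to the best individual on the ring neighborhood.
    hood = [(i + d) % n for d in range(-k, k + 1)]
    n_best = hood[int(np.argmin(fitness[hood]))]
    p, q = rng.choice([j for j in hood if j != i], size=2, replace=False)
    loc = pop[i] + alpha * (pop[n_best] - pop[i]) + beta * (pop[p] - pop[q])

    # Donor vector: convex combination controlled by the weight w.
    return w * g + (1.0 - w) * loc

def trig_donor(pop, fitness, rng=None):
    """Trigonometric mutation: a donor vector biased towards the best of
    three randomly chosen individuals, sharpening late-stage exploitation."""
    rng = rng or np.random.default_rng()
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    f = np.abs(fitness[[r1, r2, r3]]).astype(float)
    p1, p2, p3 = f / f.sum()
    return ((pop[r1] + pop[r2] + pop[r3]) / 3.0
            + (p2 - p1) * (pop[r1] - pop[r2])
            + (p3 - p2) * (pop[r2] - pop[r3])
            + (p1 - p3) * (pop[r3] - pop[r1]))
```

In a complete Differential Evolution loop, either donor would still pass through crossover and greedy selection against its parent; only the mutation step differs between the variants compared in the paper.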


Publisher
Database: Elsevier - ScienceDirect
Journal: Applied Soft Computing - Volume 21, August 2014, Pages 382–406
Authors