Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
393211 | 665578 | 2015 | 11 pages PDF | Free download |
Novel Evolutionary Algorithms are usually tested on sets of artificially constructed benchmark problems. Such problems are often designed to make the search for a single global extremum (usually a minimum) difficult. In this paper it is shown that benchmarking heuristics on either minimization or maximization of the same set of artificially created functions (with equal bounds and the same number of allowed function calls) may lead to very different rankings of the tested algorithms. As Evolutionary Algorithms and other heuristic optimizers are developed in order to be applicable to real-world problems, this result may raise doubts about the practical meaning of benchmarking them on artificial functions, since there is little reason why searching for the minimum of such functions should be more important than searching for their maximum.

Thirty optimization heuristics are tested in the paper, including a number of variants of Differential Evolution, other kinds of Evolutionary Algorithms, Particle Swarm Optimization, Direct Search methods and, following an idea borrowed from the No Free Lunch theorems, pure random search. Some discussion regarding the choice of the mean or the median performance for comparison is provided, together with a short debate on the overall performance of particular methods.
Journal: Information Sciences - Volume 297, 10 March 2015, Pages 191–201
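The experimental setup the abstract describes can be illustrated with a minimal sketch: run each optimizer under an identical evaluation budget on a benchmark function, once for minimization and once for maximization (implemented here by negating the function), and compare the resulting rankings. This is not the paper's actual protocol or its thirty heuristics; the two toy optimizers below (pure random search and a simple (1+1) evolution strategy) and the choice of the Rastrigin function are illustrative assumptions.

```python
import math
import random

def rastrigin(x):
    """Classic multimodal benchmark; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_search(f, bounds, budget, rng):
    """Pure random search: best of `budget` uniform samples."""
    return min(f([rng.uniform(lo, hi) for lo, hi in bounds]) for _ in range(budget))

def one_plus_one_es(f, bounds, budget, rng, sigma=0.3):
    """A (1+1) evolution strategy with fixed-width Gaussian mutation."""
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    for _ in range(budget - 1):
        y = [min(max(xi + rng.gauss(0.0, sigma), lo), hi)
             for xi, (lo, hi) in zip(x, bounds)]
        fy = f(y)
        if fy <= fx:  # keep the child if it is no worse
            x, fx = y, fy
    return fx

def ranking(direction, runs=10, budget=2000):
    """Rank the optimizers by mean best value over independent runs.
    direction=+1 minimizes rastrigin; direction=-1 minimizes -rastrigin,
    i.e. maximizes rastrigin under the same bounds and budget."""
    bounds = [(-5.12, 5.12)] * 5
    f = lambda x: direction * rastrigin(x)
    algos = {"random search": random_search, "(1+1)-ES": one_plus_one_es}
    mean = {name: sum(alg(f, bounds, budget, random.Random(seed))
                      for seed in range(runs)) / runs
            for name, alg in algos.items()}
    return sorted(mean, key=mean.get)  # best (lowest mean) first

print("minimization ranking:", ranking(+1))
print("maximization ranking:", ranking(-1))
```

With thirty heuristics instead of two, and a full suite of benchmark functions, comparing `ranking(+1)` against `ranking(-1)` is the kind of experiment from which the paper draws its conclusion that the two directions can order algorithms very differently.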