Article ID: 4944897
Journal: Information Sciences
Published Year: 2016
Pages: 25
File Type: PDF
Abstract
Over the last two decades, numerous metaheuristics have been proposed, and today it seems that nobody is able to understand, evaluate, or compare them all. In principle, optimization methods, including the recently popular Evolutionary Computation and Swarm Intelligence-based ones, should be developed to solve real-world problems. Yet the vast majority of metaheuristics are tested in their source papers on artificial benchmarks only, so their usefulness for practical applications remains unverified. As a result, choosing the proper method for a particular real-world problem is a difficult task. This paper shows that such a choice is even more complicated if one wishes, with good reason, to use metaheuristics twice: once to find the best and once to find the worst solutions for a specific numerical real-world problem. It often turns out that different optimizers are recommended for each of these two tasks. This finding is based on testing 30 metaheuristics on the numerical real-world problems from CEC2011. First, we solve 22 minimization problems as defined for CEC2011. Then, we reverse the objective function of each problem and search for its maximizing solution. We also observe that algorithms ranked highest on average may not perform best for any specific problem. Rather, the highest average ranking may be achieved by methods that are never among the poorest ones. In other words, occasional winners may get less attention than rare losers.
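The reversal step described in the abstract amounts to negating the objective: any minimizer can then be reused to search for worst-case solutions of the same problem. The sketch below illustrates this idea only; the minimize routine is a hypothetical random-search stand-in (not one of the 30 tested metaheuristics), and the sphere function is an illustrative objective, not a CEC2011 problem.

    import random

    def minimize(f, bounds, iters=10_000, seed=0):
        # Toy random-search stand-in for a metaheuristic minimizer.
        # Placeholder only; not one of the algorithms from the paper.
        rng = random.Random(seed)
        best_x, best_val = None, float("inf")
        for _ in range(iters):
            x = [rng.uniform(lo, hi) for lo, hi in bounds]
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        return best_x, best_val

    def sphere(x):
        # Illustrative objective (not a CEC2011 problem).
        return sum(xi * xi for xi in x)

    bounds = [(-5.0, 5.0)] * 3

    # Best solution: minimize f directly.
    x_best, f_best = minimize(sphere, bounds)

    # Worst solution: minimize the negated objective, i.e. maximize f.
    x_worst, neg_f_worst = minimize(lambda x: -sphere(x), bounds)
    f_worst = -neg_f_worst

    print("best :", f_best)
    print("worst:", f_worst)

Because only the objective is wrapped, the two searches can use entirely different optimizers, which is exactly the situation the paper investigates: the method that minimizes best need not be the one that maximizes best.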
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors