Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4944350 | Information Sciences | 2017 | 18 Pages |
Abstract
Large-scale optimization is an active research area in which many algorithms, benchmark functions, and competitions have been proposed to date. However, extremely high-dimensional optimization problems comprising millions of variables demand new approaches that are both effective in solution quality and efficient in runtime. Memetic algorithms are popular in continuous optimization, but at such extreme dimensionality they are hampered by limited computational and memory resources, and their heuristics must cope with the immensity of the search space. This work shows how the MapReduce parallel programming model allows scaling to problems with millions of variables, and presents an adaptation of the MA-SW-Chains algorithm to the MapReduce framework. Benchmark functions from the IEEE CEC 2010 and 2013 competitions are considered, and results with 1, 3, and 10 million variables are presented. MapReduce proves to be an effective approach to scaling optimization algorithms to extremely high-dimensional problems, taking advantage of the combined computational and memory resources distributed across a computer cluster.
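The core idea described above, splitting the evaluation of a very long decision vector across MapReduce workers, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`map_partial_fitness`, `reduce_fitness`, `mapreduce_fitness`) and the choice of the separable Sphere function are assumptions made for clarity.

```python
def map_partial_fitness(chunk):
    # Mapper: each worker evaluates only its slice of the decision vector.
    # For a separable function such as Sphere, f(x) = sum(x_i^2),
    # a partial sum over the slice is enough.
    return sum(x * x for x in chunk)

def reduce_fitness(partials):
    # Reducer: combine the workers' partial contributions into one fitness value.
    return sum(partials)

def mapreduce_fitness(solution, n_chunks):
    # Partition the (potentially multi-million-variable) vector into chunks,
    # map each chunk to a partial fitness, then reduce to the final value.
    size = (len(solution) + n_chunks - 1) // n_chunks
    chunks = [solution[i:i + size] for i in range(0, len(solution), size)]
    return reduce_fitness(map(map_partial_fitness, chunks))

if __name__ == "__main__":
    import random
    random.seed(0)
    x = [random.uniform(-5.0, 5.0) for _ in range(10_000)]
    # Distributed-style evaluation matches the direct computation.
    assert abs(mapreduce_fitness(x, 8) - sum(v * v for v in x)) < 1e-6
```

In an actual cluster deployment the chunks would live on distributed storage and the map calls would run on separate nodes, so both memory and compute scale with the number of workers; the single-process version above only illustrates the decomposition.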
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Alberto Cano, Carlos García-Martínez, Sebastián Ventura