| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 6894416 | European Journal of Operational Research | 2018 | 36 | |
Abstract
In this article, we propose a new method for multiobjective optimization problems in which the objective functions are expressed as expectations of random functions. The method extends the classical stochastic gradient algorithm with a deterministic multiobjective algorithm, the Multiple Gradient Descent Algorithm (MGDA). In MGDA, a descent direction common to all specified objective functions is identified through a result of convex geometry. Injecting this common descent vector and the notion of Pareto stationarity into the stochastic gradient algorithm enables it to solve multiobjective problems. Mean-square and almost-sure convergence of the new algorithm are proven under the hypotheses of the classical stochastic gradient algorithm. The efficiency of the algorithm is illustrated on a set of benchmarks of varying complexity and assessed in comparison with two classical algorithms (NSGA-II, DMS) coupled with a Monte Carlo estimator of the expectations.
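The core idea described in the abstract can be sketched as follows. This is an illustrative toy implementation, not the authors' exact algorithm: for two objectives, the minimum-norm element of the convex hull of the two stochastic gradients has a closed form, and using it as the step direction with decreasing step sizes drives the iterate toward a Pareto-stationary point (where the convex hull of the gradients contains the zero vector). The bi-objective test problem, step-size schedule, and noise model below are assumptions chosen for the example.

```python
import numpy as np


def common_descent_direction(g1, g2):
    """Minimum-norm element of the convex hull {a*g1 + (1-a)*g2 : a in [0, 1]}.

    For two gradients this MGDA-style subproblem has a closed-form solution:
    minimize ||g2 + a*(g1 - g2)||^2 over a, then clip a to [0, 1].
    """
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return g1  # identical gradients: the hull is a single point
    a = np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2


def stochastic_multi_gradient(x0, stochastic_grads, n_iter=2000, seed=0):
    """Stochastic gradient descent along the common descent direction.

    Uses the classical decreasing step sizes s_t = 1/t; near a
    Pareto-stationary point the common direction shrinks toward zero.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_iter + 1):
        g1, g2 = stochastic_grads(x, rng)
        w = common_descent_direction(g1, g2)
        x = x - (1.0 / t) * w
    return x


# Hypothetical bi-objective problem: f1(x) = E[(x - U1)^2], f2(x) = E[(x - 2 - U2)^2],
# with small Gaussian noise; the Pareto set of the expectations is the interval [0, 2].
def example_grads(x, rng):
    u1 = rng.normal(0.0, 0.1)
    u2 = rng.normal(0.0, 0.1)
    return 2.0 * (x - u1), 2.0 * (x - 2.0 - u2)


x_final = stochastic_multi_gradient(np.array([5.0]), example_grads)
```

Starting from x = 5, both stochastic gradients point the same way, so the common direction is a true descent direction for both objectives; once the iterate enters the Pareto set, the gradients oppose each other and the minimum-norm combination collapses toward zero, which is exactly the Pareto-stationarity condition the abstract refers to.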
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
Quentin Mercier, Fabrice Poirion, Jean-Antoine Désidéri