Article ID: 711554
Journal: IFAC-PapersOnLine
Published Year: 2015
Pages: 6
File Type: PDF
Abstract

Learning and optimization over huge amounts of data have become very important problems. Unfortunately, even online and parallel optimization methods may fail to process such volumes of data within acceptable time limits; in that case, distributed optimization methods may be the only viable solution. In this paper we consider a particular type of optimization problem in a distributed setting. We propose an algorithm substantially based on the distributed stochastic gradient descent method of Zinkevich et al. (2010). Finally, we experimentally study the properties of the proposed algorithm and demonstrate its superiority for this particular type of optimization problem.
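The distributed SGD method of Zinkevich et al. (2010) that the abstract refers to is a parameter-averaging scheme: each worker runs ordinary SGD on its own data shard with no communication, and the resulting models are averaged once at the end. A minimal sketch of that scheme, assuming a simple least-squares loss on synthetic data (all names, the learning rate, and the loss are illustrative choices, not details from the paper):

```python
import numpy as np

def sgd_shard(X, y, lr=0.01, epochs=5, seed=0):
    """Run plain SGD on one worker's data shard (least-squares loss)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x.w - y)^2
            w -= lr * grad
    return w

def simu_parallel_sgd(X, y, n_workers=4, **kw):
    """Parameter-averaging distributed SGD: each worker trains on its
    shard independently; the final models are averaged once at the end."""
    shards = np.array_split(np.arange(len(y)), n_workers)
    models = [sgd_shard(X[idx], y[idx], seed=k, **kw)
              for k, idx in enumerate(shards)]
    return np.mean(models, axis=0)

# Toy demo: recover w_true = [2, -1] from noisy linear measurements.
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=400)
w = simu_parallel_sgd(X, y)
```

The appeal of this scheme is that the only communication is a single averaging step at the end, which is what makes it attractive when even parallel shared-memory methods cannot keep up with the data volume.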

Related Topics
Physical Sciences and Engineering Engineering Computational Mechanics