Article ID Journal Published Year Pages File Type
409002 Neurocomputing 2016 8 Pages PDF
Abstract

This paper investigates a distributed optimization problem over a time-varying multi-agent network in the presence of delays, where each agent has local access only to its own convex objective function, and the agents cooperatively minimize the sum of these objective functions over the network. Based on the mirror descent method, we develop a distributed algorithm that solves this problem by exploiting delayed gradient information. Furthermore, we analyze the effect of delayed gradients on the convergence of the algorithm and provide an explicit bound on the convergence rate as a function of the delay parameter, the network size, and the network topology. Our results show that the delays are asymptotically negligible for smooth problems. The proposed algorithm can be viewed as a generalization of distributed gradient-based projection methods, since it uses a customized Bregman divergence in place of the usual squared Euclidean distance. Finally, simulation results on a logistic regression problem are presented to demonstrate the effectiveness of the algorithm.
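To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of distributed mirror descent with a fixed gradient delay. It assumes a negative-entropy mirror map over the probability simplex, so the mirror step becomes a multiplicative update with a normalization as the Bregman projection; the delay is modeled by a per-agent buffer that serves the gradient computed `tau` iterations earlier. All function and parameter names here are illustrative.

```python
import numpy as np

def local_grad(theta, X, y):
    # Gradient of the logistic loss on one agent's local data.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    return X.T @ (p - y) / len(y)

def delayed_mirror_descent(Xs, ys, W, T=200, tau=3, eta=0.5):
    """Distributed mirror descent on the probability simplex with a
    fixed gradient delay tau.  W is a doubly stochastic mixing matrix
    encoding the (here, static) network; the entropy mirror map turns
    the mirror step into a multiplicative weight update."""
    n = len(Xs)                        # number of agents
    d = Xs[0].shape[1]
    theta = np.full((n, d), 1.0 / d)   # every agent starts at the simplex center
    # Each agent buffers its last tau gradients; zeros stand in before any exist.
    grad_buf = [[np.zeros(d)] * tau for _ in range(n)]
    for _ in range(T):
        mixed = W @ theta              # consensus (averaging) step over the network
        new = np.empty_like(theta)
        for i in range(n):
            g = grad_buf[i].pop(0)     # use the gradient computed tau steps ago
            grad_buf[i].append(local_grad(theta[i], Xs[i], ys[i]))
            w = mixed[i] * np.exp(-eta * g)   # entropic mirror (multiplicative) step
            new[i] = w / w.sum()              # Bregman projection onto the simplex
        theta = new
    return theta
```

With a Euclidean mirror map the same scheme reduces to a distributed projected-gradient method, which is the sense in which the paper's algorithm generalizes those methods.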

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence