Article ID: 4944913
Journal: Information Sciences
Published Year: 2016
Pages: 15
File Type: PDF
Abstract
This paper develops new algorithms for distributed cooperative learning based on zero-gradient-sum (ZGS) optimization in a network setting. Specifically, the feedforward neural network with random weights (FNNRW) is trained on data distributed across multiple learning agents, with each agent running the training procedure on a local subset of the full dataset. The scheme requires no fusion center, which may be unavailable in practice owing to, e.g., resource limitations or security and privacy concerns. The centralized FNNRW problem is reformulated into an equivalent separable form with consensus constraints among nodes and is solved by the ZGS-based distributed optimization strategy, which theoretically guarantees convergence to the optimal solution. The proposed method is more effective than existing methods based on decentralized average consensus (DAC) and the alternating direction method of multipliers (ADMM): it is simple, requires fewer computational and communication resources, and is therefore well suited to potential applications such as wireless sensor networks, artificial intelligence, and computational biology, where datasets are often extremely large, high-dimensional, and spread across distributed data sources. We present simulation results on both synthetic and real-world datasets.
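The paper's exact formulation is not reproduced in this listing, but the following minimal Python sketch illustrates the two ingredients the abstract names: an FNNRW, whose random hidden layer is fixed and only whose output weights are trained via ridge regression, and a forward-Euler discretization of a ZGS-style consensus update. Everything concrete here is an illustrative assumption rather than the paper's setup: the sigmoid activation, the ring topology, the regularizer lam, the step size gamma, and all problem sizes.

import numpy as np

rng = np.random.default_rng(0)

def fnnrw_features(X, W, b):
    # FNNRW hidden layer: a fixed random affine map followed by a sigmoid;
    # only the linear output weights are ever trained.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Illustrative setup (all sizes are assumptions): 4 agents, each holding a
# private shard of the data; no fusion center ever sees the full dataset.
n_agents, n_in, n_hidden, lam = 4, 5, 10, 1.0
W = rng.normal(size=(n_in, n_hidden))   # random input weights, fixed
b = rng.normal(size=n_hidden)           # random hidden biases, fixed

shards = []
for _ in range(n_agents):
    X = rng.normal(size=(50, n_in))
    y = np.sin(X.sum(axis=1))           # synthetic regression target
    shards.append((fnnrw_features(X, W, b), y))

# Each local ridge objective f_i(beta) = ||H_i beta - y_i||^2 + lam ||beta||^2
# is quadratic, so its Hessian A_i = 2 (H_i^T H_i + lam I) is constant.
A = [2.0 * (H.T @ H + lam * np.eye(n_hidden)) for H, _ in shards]
c = [2.0 * H.T @ y for H, y in shards]
beta = [np.linalg.solve(Ai, ci) for Ai, ci in zip(A, c)]  # local minimizers

# ZGS key idea: initialize every agent at its LOCAL minimizer, so the local
# gradients sum to zero, then use an update that preserves that zero sum
# while driving the agents to consensus; a symmetric neighbor exchange,
# scaled by the inverse local Hessian, does exactly that.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}  # assumed ring topology
gamma = 0.5  # assumed step size for the discretized ZGS flow
for _ in range(2000):
    beta = [beta[i] + gamma * np.linalg.solve(
                A[i], sum(beta[j] - beta[i] for j in neighbors[i]))
            for i in range(n_agents)]

# At consensus with zero gradient sum, every agent holds the minimizer of
# the GLOBAL objective sum_i f_i, i.e. the centralized FNNRW solution.
beta_star = np.linalg.solve(sum(A), sum(c))
print("max disagreement:",
      max(np.linalg.norm(beta[i] - beta[0]) for i in range(n_agents)))
print("gap to centralized solution:", np.linalg.norm(beta[0] - beta_star))

The sketch also hints at why ZGS can be cheaper than the DAC and ADMM baselines the abstract mentions: the zero-gradient-sum invariant is maintained by the initialization and the symmetric exchange alone, so no dual multipliers or inner averaging rounds are needed, and each iteration exchanges only the current output-weight vectors with neighbors.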
Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence