Article ID: 4948052
Journal: Neurocomputing
Published Year: 2017
Pages: 37
File Type: PDF
Abstract
This paper aims to develop distributed learning algorithms for feedforward neural networks with random weights (FNNRWs) using event-triggered communication schemes. Under this scheme, the communication process of each agent is driven by a trigger condition, so that agents exchange information asynchronously and only when it is genuinely required. To this end, the centralized FNNRW problem is cast as a set of distributed subproblems with consensus constraints imposed on the desired parameters and solved following the discrete-time zero-gradient-sum (ZGS) strategy. An event-triggered communication scheme is introduced into the ZGS-based FNNRW algorithm to avoid unnecessary transmission costs, which is particularly useful when communication resources are limited. It is proved that the proposed event-triggered approach converges exponentially, provided the design parameter is chosen properly and the agent interaction graph is strongly connected and weight-balanced. Two numerical simulation examples are provided to verify the effectiveness of the proposed algorithm.
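The abstract does not give the paper's update equations, so the following Python sketch only illustrates the general kind of scheme described: each agent fits the output weights of a random-feature network on its own data, starts the ZGS iteration from its local minimizer, applies a Hessian-scaled consensus correction, and broadcasts its estimate only when it has drifted past a trigger threshold. All names and parameters here (H_list, A, threshold, decay, the ridge penalty lam) are illustrative assumptions, not the authors' notation or their exact algorithm.

```python
import numpy as np

def random_feature_map(X, W, b):
    """Hidden layer with fixed random weights (sigmoid activation)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def event_triggered_zgs(H_list, y_list, A, lam=1e-2, alpha=0.05,
                        threshold=1e-3, decay=0.98, iters=200):
    """Hypothetical event-triggered ZGS consensus for the output weights.

    H_list[i], y_list[i] : agent i's hidden-layer outputs and targets
    A                    : weight-balanced adjacency matrix of the agent graph
    Each agent i minimizes f_i(b) = ||H_i b - y_i||^2 + lam ||b||^2 locally
    and coordinates with its neighbors through event-triggered broadcasts.
    """
    n = len(H_list)
    d = H_list[0].shape[1]
    # Local Hessians of the regularized least-squares objectives.
    Hess = [2 * (H.T @ H + lam * np.eye(d)) for H in H_list]
    # ZGS initialization: each agent starts at its own local minimizer,
    # so the sum of local gradients is zero at the start.
    beta = [np.linalg.solve(H_list[i].T @ H_list[i] + lam * np.eye(d),
                            H_list[i].T @ y_list[i]) for i in range(n)]
    broadcast = [b.copy() for b in beta]          # last transmitted values
    thr = threshold
    for _ in range(iters):
        for i in range(n):
            # Event trigger: transmit only if the local state drifted enough
            # from the last broadcast value.
            if np.linalg.norm(beta[i] - broadcast[i]) > thr:
                broadcast[i] = beta[i].copy()
        for i in range(n):
            # Consensus correction scaled by the inverse local Hessian
            # (discrete-time ZGS-style step on the broadcast estimates).
            disagreement = sum(A[i, j] * (broadcast[j] - broadcast[i])
                               for j in range(n))
            beta[i] = beta[i] + alpha * np.linalg.solve(Hess[i], disagreement)
        thr *= decay                              # decaying trigger threshold
    return beta
```

With a decaying threshold of this kind, broadcasts become progressively rarer as the agents' estimates approach consensus, which is the intended saving when communication resources are limited; the step size alpha and the threshold schedule would have to be chosen in line with the paper's convergence conditions.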
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence