Article ID: 4944913
Journal ID: 1438015
Year of publication: 2016
English article: 15-page PDF
Full text: free download
English title of the ISI article
A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
Persian translation of the title
Zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
Related topics
Engineering and basic sciences; Computer engineering; Artificial intelligence
English abstract
This paper focuses on developing new algorithms for distributed cooperative learning based on zero-gradient-sum (ZGS) optimization in a network setting. Specifically, a feedforward neural network with random weights (FNNRW) is trained on data distributed across multiple learning agents, with each agent running the algorithm on its own subset of the entire dataset. The scheme requires no fusion center, which may be unavailable for practical, security, or privacy reasons. The centralized FNNRW problem is reformulated into an equivalent separable form with consensus constraints among nodes and is solved by the ZGS-based distributed optimization strategy, which theoretically guarantees convergence to the optimal solution. Compared with existing methods based on the decentralized average consensus (DAC) and alternating direction method of multipliers (ADMM) strategies, the proposed method is simpler and requires fewer computational and communication resources, making it well suited for potential applications such as wireless sensor networks, artificial intelligence, and computational biology, which involve datasets that are often extremely large, high-dimensional, and located on distributed data sources. Simulation results on both synthetic and real-world datasets are presented.
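To make the general idea concrete, below is a minimal, hypothetical NumPy sketch of ZGS-style distributed training of a network with random hidden weights: each agent holds a subset of the data and a shared random feature map, initializes its output weights at its own local minimizer (so the local gradients sum to zero), and then runs a Hessian-weighted consensus update, here a forward-Euler discretization of a zero-gradient-sum flow. The ring topology, step-size rule, regularization value, and all variable names are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch (assumed setup, not the paper's exact algorithm):
# distributed FNNRW-style training via a discretized zero-gradient-sum (ZGS) update.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split across agents.
n_agents, n_per_agent, d_in, d_hidden = 4, 100, 5, 30
lam = 10.0  # total ridge penalty on the output weights (kept large so the demo stays well conditioned)

X_parts = [rng.normal(size=(n_per_agent, d_in)) for _ in range(n_agents)]
y_parts = [np.sin(X.sum(axis=1, keepdims=True)) for X in X_parts]

# Random hidden layer shared by every agent (e.g., generated from a common seed).
W_hid = rng.normal(size=(d_in, d_hidden))
b_hid = rng.normal(size=(1, d_hidden))

def hidden(X):
    """Random feature map with fixed random weights; only the output weights are trained."""
    return np.tanh(X @ W_hid + b_hid)

# Local quadratic objectives f_i(beta) = ||H_i beta - y_i||^2 + (lam / n_agents) ||beta||^2,
# with constant Hessians A_i = H_i^T H_i + (lam / n_agents) I.
H_parts = [hidden(X) for X in X_parts]
A_parts = [H.T @ H + (lam / n_agents) * np.eye(d_hidden) for H in H_parts]
b_parts = [H.T @ y for H, y in zip(H_parts, y_parts)]
A_inv = [np.linalg.inv(A) for A in A_parts]

# ZGS initialization: each agent starts at its own local minimizer, so the sum of
# local gradients is zero and remains zero along the (discretized) flow.
beta = [Ai @ bi for Ai, bi in zip(A_inv, b_parts)]

# Ring communication graph (illustrative choice); its Laplacian eigenvalues are at most 4.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
min_eig = min(np.linalg.eigvalsh(A).min() for A in A_parts)
gamma = 0.45 * min_eig  # conservative forward-Euler step: gamma * 4 / min_eig < 2

for _ in range(10000):
    consensus = [sum(beta[j] - beta[i] for j in neighbors[i]) for i in range(n_agents)]
    # Hessian-inverse-weighted consensus step of the ZGS flow.
    beta = [beta[i] + gamma * A_inv[i] @ consensus[i] for i in range(n_agents)]

# The agents should (approximately) agree with the centralized ridge solution on the pooled data.
beta_central = np.linalg.solve(sum(A_parts), sum(b_parts))
print("max deviation from centralized solution:",
      max(np.linalg.norm(b - beta_central) for b in beta))
```

In this sketch only the consensus differences are exchanged between neighboring agents; the raw data never leave an agent, which is the property the abstract highlights when it notes that no fusion center is required.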
Publisher
Database: Elsevier - ScienceDirect
Journal: Information Sciences - Volume 373, 10 December 2016, Pages 404-418
Authors