Article ID: 6957891
Journal ID: 1451923
Year: 2018
Full text: 12-page PDF, free download
English title (ISI article)
Performance limits of stochastic sub-gradient learning, part II: Multi-agent case
Related subjects
Engineering and Physical Sciences; Computer Engineering; Signal Processing
English abstract
The analysis in Part I [1] revealed interesting properties of subgradient learning algorithms in the context of stochastic optimization. These algorithms are used when the risk functions are non-smooth or involve non-differentiable components. They have long been recognized as slow-converging methods. However, it was shown in Part I [1] that the rate of convergence becomes linear for stochastic optimization problems, with the error iterate converging at an exponential rate α^i to within an O(μ)-neighborhood of the optimizer, for some α ∈ (0, 1) and small step-size μ. This conclusion was established under weaker assumptions than in the prior literature and, moreover, several important problems were shown to satisfy these weaker assumptions automatically. These results revealed that subgradient learning methods have more favorable behavior than originally thought. The results of Part I [1] were exclusive to single-agent adaptation. The purpose of this Part II is to examine the implications of these discoveries when a collection of networked agents employs subgradient learning as their cooperative mechanism. The analysis shows that, despite the coupled dynamics that arise in a networked scenario, the agents are still able to attain linear convergence in the stochastic case; they are also able to reach agreement within O(μ) of the optimizer.
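The cooperative mechanism described in the abstract can be illustrated with a minimal sketch of a diffusion-style (adapt-then-combine) stochastic subgradient strategy. This is not the paper's exact formulation: the least-absolute-deviation risk, the ring-topology combination matrix `A`, the step-size `mu`, and all variable names are illustrative assumptions chosen to show the generic behavior (each agent descends along a stochastic subgradient of a non-smooth risk, then averages with its neighbors, driving all iterates to within a small neighborhood of the common optimizer).

```python
import numpy as np

rng = np.random.default_rng(0)

M = 2          # parameter dimension (illustrative)
N = 5          # number of networked agents (illustrative)
mu = 0.01      # small constant step-size, as in the O(mu) analysis
w_true = np.array([1.0, -0.5])  # common model all agents try to learn

# Hypothetical doubly-stochastic combination matrix over a ring topology:
# each agent averages with its two neighbors and itself.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k + 1) % N] = 0.25
    A[k, (k - 1) % N] = 0.25

W = np.zeros((N, M))  # row k holds agent k's current iterate

for i in range(20000):
    psi = np.empty_like(W)
    for k in range(N):
        # Streaming data at agent k: d = h' w_true + noise
        h = rng.standard_normal(M)
        d = h @ w_true + 0.1 * rng.standard_normal()
        # Stochastic subgradient of the non-smooth risk E|d - h' w|
        g = -np.sign(d - h @ W[k]) * h
        psi[k] = W[k] - mu * g          # adapt step
    W = A.T @ psi                       # combine step (neighborhood averaging)

# Distance of each agent from the optimizer, and inter-agent disagreement
err = np.linalg.norm(W - w_true, axis=1).max()
disagreement = np.linalg.norm(W - W.mean(axis=0), axis=1).max()
```

After the transient phase, `err` settles into a small neighborhood of zero whose size shrinks with `mu`, and `disagreement` is driven down by the combine step, mirroring the abstract's claim that the agents reach agreement within O(μ) of the optimizer.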
Publisher
Database: Elsevier - ScienceDirect
Journal: Signal Processing - Volume 144, March 2018, Pages 253-264