Article code: 6863649
Journal code: 1439516
Publication year: 2018
English article: 9 pages PDF
Full text: free download
English title of the ISI article
Boundedness and convergence of split complex gradient descent algorithm with momentum and regularizer for TSK fuzzy models
Related subjects
Engineering and Basic Sciences | Computer Engineering | Artificial Intelligence
English abstract
This paper investigates a split-complex gradient descent based neuro-fuzzy algorithm with self-adaptive momentum and an L2 regularizer for training TSK (Takagi-Sugeno-Kang) fuzzy inference models. The main difficulty in handling complex-valued data with a fuzzy system is the conflict between boundedness and analyticity in the complex domain, as expressed by Liouville's theorem. The proposed algorithm avoids this conflict by splitting each complex variable into its real and imaginary parts and operating on a pair of real-valued functions. A dynamical momentum term is included in the learning mechanism to accelerate learning, and an L2 regularizer is added to control the magnitude of the weight parameters. Furthermore, a detailed convergence analysis of the proposed algorithm is given: the error function is shown to decrease monotonically and the weight sequence to converge; under an additional mild condition, strong convergence of the weight sequence is deduced. Finally, simulation results are presented to verify the theoretical analysis.
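A minimal sketch of the split-complex update idea described in the abstract, assuming a simple quadratic loss on a single complex weight. The learning rate, momentum factor, and regularization coefficient are illustrative choices; the paper's self-adaptive momentum rule and the TSK fuzzy model structure are not reproduced here.

```python
# Sketch (not the paper's exact formulation): one split-complex gradient
# descent update with a momentum term and an L2 regularizer. The loss,
# eta, mu, and lam below are illustrative assumptions.

def split_complex_step(u, v, du_prev, dv_prev, target,
                       eta=0.1, mu=0.5, lam=1e-3):
    """One update of the split weight w = u + i*v for the loss
    E(u, v) = 0.5*|w - target|^2 + 0.5*lam*(u**2 + v**2)."""
    # Real-valued partial derivatives taken separately in u and v,
    # so no analytic (holomorphic) loss function is required.
    grad_u = (u - target.real) + lam * u
    grad_v = (v - target.imag) + lam * v
    # Momentum: reuse a fraction of the previous increment to speed up
    # learning (the paper makes this factor self-adaptive).
    du = -eta * grad_u + mu * du_prev
    dv = -eta * grad_v + mu * dv_prev
    return u + du, v + dv, du, dv

# Usage: drive the split weight toward a complex target value.
target = 1.0 - 2.0j
u, v, du, dv = 0.0, 0.0, 0.0, 0.0
for _ in range(200):
    u, v, du, dv = split_complex_step(u, v, du, dv, target)
print(u + 1j * v)  # approaches target/(1 + lam), the regularized optimum
```

The split into (u, v) keeps the error function real-valued and bounded, which is what allows the monotonicity and convergence arguments summarized above without requiring analyticity of the loss.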
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 311, 15 October 2018, Pages 270-278
Authors