Article code: 406528
Journal code: 678092
Publication year: 2014
English article: 9-page PDF
Full-text version: Free download
English title of the ISI article
A modified gradient learning algorithm with smoothing L1/2 regularization for Takagi–Sugeno fuzzy models
Related topics
Engineering and Basic Sciences > Computer Engineering > Artificial Intelligence
English abstract

A popular and feasible approach to determine the appropriate size of a neural network is to remove unnecessary connections from an oversized network. The advantage of L1/2 regularization for sparse modeling has been recognized. However, the nonsmoothness of L1/2 regularization may lead to an oscillation phenomenon during training. An approach with smoothing L1/2 regularization is proposed in this paper for Takagi–Sugeno (T–S) fuzzy models, in order to improve learning efficiency and to promote sparsity of the models. The new smoothing L1/2 regularizer removes the oscillation. It also enables us to prove weak and strong convergence results for zero-order T–S fuzzy neural networks. Furthermore, a relationship between the learning rate parameter and the penalty parameter is given to guarantee convergence. Simulation results are provided to support the theoretical findings, and they show the superiority of the smoothing L1/2 regularization over the original L1/2 regularization.
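The abstract does not state the exact smoothing polynomial used in the paper, but the idea can be illustrated: replace |w| in the L1/2 penalty λ·Σ|w_i|^(1/2) with a smooth approximation near zero, so the penalized gradient is defined everywhere and the update no longer oscillates around w = 0. The sketch below is a minimal, hypothetical illustration in Python; the smoothing function `smooth_abs`, the quadratic blend it uses, and the parameter names (`a`, `eta`, `lam`) are assumptions for demonstration, not the authors' exact formulation.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """Smooth approximation of |w| (illustrative choice, not the paper's exact polynomial):
    quadratic blend for |w| < a, equal to |w| elsewhere; C^1 and bounded below by a/2 > 0."""
    out = np.abs(w).astype(float)
    inner = np.abs(w) < a
    out[inner] = w[inner] ** 2 / (2 * a) + a / 2  # matches |w| and its slope at |w| = a
    return out

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs, well defined everywhere, including at w = 0."""
    g = np.sign(w).astype(float)
    inner = np.abs(w) < a
    g[inner] = w[inner] / a
    return g

def penalized_gradient_step(w, loss_grad, eta=0.05, lam=1e-3, a=0.1):
    """One gradient step on E(w) = E0(w) + lam * sum(smooth_abs(w)**0.5)."""
    s = smooth_abs(w, a)
    # Chain rule: d/dw [s(w)^(1/2)] = 0.5 * s(w)^(-1/2) * s'(w); s(w) >= a/2, so no blow-up at 0.
    penalty_grad = lam * 0.5 * s ** (-0.5) * smooth_abs_grad(w, a)
    return w - eta * (loss_grad(w) + penalty_grad)

# Toy usage: redundant weights of a quadratic loss are driven toward zero (sparsity),
# while the smoothed penalty keeps the iterates from oscillating around w = 0.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
w_star = np.array([1.0, 0.0, 0.0, 0.5, 0.0])
loss_grad = lambda w: w - w_star  # gradient of 0.5 * ||w - w_star||^2
for _ in range(500):
    w = penalized_gradient_step(w, loss_grad)
print(np.round(w, 3))
```

In the paper's setting the loss gradient would come from a zero-order T–S fuzzy model rather than this toy quadratic, and the convergence guarantee ties the learning rate `eta` to the penalty parameter `lam`; the sketch only shows where the smoothed penalty enters the update.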

Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 138, 22 August 2014, Pages 229–237
Authors