Article code: 6865947 · Journal code: 679603 · Year of publication: 2015 · English article: 11-page PDF · Full text: free download
English title of the ISI article
Online training and its convergence for faulty networks with multiplicative weight noise
Related subjects
Engineering and Basic Sciences · Computer Engineering · Artificial Intelligence
English abstract
A recent article showed that the objective function of the online weight noise injection algorithm is not equal to the training set error of faulty radial basis function (RBF) networks under the weight noise situation (Ho et al., 2010). Hence the online weight noise injection algorithm is not able to optimize the training set error of faulty networks with multiplicative weight noise. This paper proposes an online learning algorithm to tolerate multiplicative weight noise. Two learning rate cases, namely fixed learning rate and adaptive learning rate, are investigated. For the fixed learning rate case, we show that if the learning rate μ is less than 2/(σ_b² + max_i ‖ϕ(x_i)‖²), then the online algorithm converges, where the x_i's are the training input vectors, σ_b² is the variance of the multiplicative weight noise, ϕ(x_i) = [ϕ_1(x_i), …, ϕ_M(x_i)]^T, and ϕ_j(·) is the output of the j-th RBF node. In addition, as μ → 0, the trained weight vector tends to the optimal solution. For the adaptive learning rate case, let the learning rates {μ_k} be a decreasing sequence with lim_{k→∞} μ_k = 0, where k is the index of learning cycles. We prove that if ∑_{k=1}^∞ μ_k = ∞ and ∑_{k=1}^∞ μ_k² < ∞, then the weight vector converges to the optimal solution. Our simulation results show that the performance of the proposed algorithm is better than that of conventional online approaches, such as online weight decay and weight noise injection.
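The abstract does not give the paper's exact update rule, so the following is only a plausible sketch of such a noise-tolerant online RBF training loop. It assumes the per-sample objective is the expected squared error under multiplicative weight noise with variance σ_b², whose gradient adds a term σ_b²·(ϕ∘ϕ)∘w to the usual LMS step, and it picks a fixed learning rate below the bound 2/(σ_b² + max_i ‖ϕ(x_i)‖²) stated above. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF node outputs phi_j(x), one per center."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def train_online(xs, ys, centers, width=0.2, sigma_b2=0.01, epochs=50):
    """Hypothetical online update tolerating multiplicative weight noise.

    Not the paper's exact algorithm (the abstract omits it); the extra
    -sigma_b2 * (phi * phi) * w term is the gradient contribution of the
    expected noise-induced error under Var(b_j) = sigma_b2.
    """
    w = np.zeros(len(centers))
    phis = np.array([rbf_features(x, centers, width) for x in xs])
    # Fixed learning rate chosen at half the abstract's stability bound
    # mu < 2 / (sigma_b^2 + max_i ||phi(x_i)||^2).
    mu = 1.0 / (sigma_b2 + max(float(p @ p) for p in phis))
    for _ in range(epochs):
        for phi, y in zip(phis, ys):
            e = y - w @ phi
            # LMS step plus the multiplicative-noise correction term.
            w += mu * (e * phi - sigma_b2 * (phi * phi) * w)
    return w

# Toy 1-D regression: fit one period of a sine with 8 RBF nodes.
centers = np.linspace(0.0, 1.0, 8)
xs = np.linspace(0.0, 1.0, 40)
ys = np.sin(2.0 * np.pi * xs)
w = train_online(xs, ys, centers)
preds = np.array([w @ rbf_features(x, centers, 0.2) for x in xs])
mse = float(np.mean((preds - ys) ** 2))
```

The small σ_b² term acts like a data-dependent weight decay, shrinking weights attached to strongly activated nodes, which is the mechanism that distinguishes this objective from plain online LMS.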
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 155, 1 May 2015, Pages 53-61
Authors