Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6865947 | Neurocomputing | 2015 | 11 | |
Abstract
A recent article showed that the objective function of the online weight noise injection algorithm is not equal to the training set error of faulty radial basis function (RBF) networks under weight noise (Ho et al., 2010). Hence, the online weight noise injection algorithm cannot optimize the training set error of faulty networks with multiplicative weight noise. This paper proposes an online learning algorithm that tolerates multiplicative weight noise. Two learning rate cases, namely fixed learning rate and adaptive learning rate, are investigated. For the fixed learning rate case, we show that the online algorithm converges if the learning rate $\mu$ is less than $2/(\sigma_b^2 + \max_i \|\phi(x_i)\|^2)$, where the $x_i$'s are the training input vectors, $\sigma_b^2$ is the variance of the multiplicative weight noise, $\phi(x_i) = [\phi_1(x_i), \ldots, \phi_M(x_i)]^T$, and $\phi_j(\cdot)$ is the output of the $j$-th RBF node. In addition, as $\mu \to 0$, the trained weight vector tends to the optimal solution. For the adaptive learning rate case, let the learning rates $\{\mu_k\}$ be a decreasing sequence with $\lim_{k \to \infty} \mu_k = 0$, where $k$ is the index of learning cycles. We prove that if $\sum_{k=1}^{\infty} \mu_k = \infty$ and $\sum_{k=1}^{\infty} \mu_k^2 < \infty$, then the weight vector converges to the optimal solution. Our simulation results show that the proposed algorithm outperforms conventional online approaches such as online weight decay and weight noise injection.
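To make the stated conditions concrete, the following is a minimal sketch, not the paper's verified algorithm: it assumes the method is stochastic gradient descent on the noise-averaged training error of a Gaussian RBF network. The update rule, the Gaussian node shape, and the helper names `rbf_features` and `train_online` are illustrative assumptions; only the step-size bound $2/(\sigma_b^2 + \max_i \|\phi(x_i)\|^2)$ and the decreasing schedule with $\sum_k \mu_k = \infty$ and $\sum_k \mu_k^2 < \infty$ are taken from the abstract.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF node outputs phi(x) = [phi_1(x), ..., phi_M(x)]^T (assumed node shape)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_online(X, y, centers, width, sigma_b2, n_cycles=50):
    """Online training sketch under multiplicative weight noise of variance sigma_b2."""
    w = np.zeros(centers.shape[0])
    # Stability bound from the abstract: mu < 2 / (sigma_b2 + max_i ||phi(x_i)||^2).
    max_phi_norm2 = max(np.sum(rbf_features(x, centers, width) ** 2) for x in X)
    mu0 = 0.5 * (2.0 / (sigma_b2 + max_phi_norm2))  # stay strictly inside the bound
    for k in range(1, n_cycles + 1):
        # Adaptive schedule mu_k = mu0 / k: sum_k mu_k = inf and sum_k mu_k^2 < inf,
        # matching the convergence conditions stated in the abstract.
        mu = mu0 / k
        for x, t in zip(X, y):
            phi = rbf_features(x, centers, width)
            e = t - phi @ w  # prediction error on the current sample
            # Assumed SGD step on the noise-averaged error
            #   (t - phi^T w)^2 + sigma_b2 * sum_j phi_j(x)^2 w_j^2;
            # the second term acts as a data-dependent decay on each weight.
            w += mu * (e * phi - sigma_b2 * (phi ** 2) * w)
    return w

# Toy usage: fit y = sin(pi * x) with 10 Gaussian nodes on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 1))
y = np.sin(np.pi * X[:, 0])
w = train_online(X, y, centers=np.linspace(-1, 1, 10).reshape(-1, 1), width=0.3, sigma_b2=0.01)
```

Note how, under these assumptions, the noise variance enters the update as a per-node, data-dependent decay term $\sigma_b^2 \phi_j(x)^2 w_j$; this distinguishes the sketch from plain online weight decay, which shrinks all weights uniformly regardless of the node outputs.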
Related Topics: Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors
Zifa Han, Rui-Bin Feng, Wai Yan Wan, Chi-Sing Leung