Article ID: 409528
Journal: Neurocomputing
Published Year: 2006
Pages: 9
File Type: PDF
Abstract

We provide a stability analysis, based on nonlinear feedback theory, for the recently introduced backpropagation–decorrelation (BPDC) recurrent learning algorithm, which adapts only the output weights of a possibly large network and can therefore learn in O(N) time. Using a small-gain criterion, we derive a simple sufficient stability inequality. The condition can be monitored online to ensure that the recurrent network remains stable, and it can in principle be applied to any network that adapts only its output weights. Based on these results, BPDC learning is further enhanced with an efficient online rescaling algorithm that stabilizes the network while it adapts. In simulations we find that this mechanism improves learning in the provably stable domain. As a byproduct, we show that BPDC is highly competitive on standard data sets, including the recently introduced CATS benchmark data [CATS data. URL: http://www.cis.hut.fi/lendasse/competition/competition.html].
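The sketch below illustrates the general setup the abstract describes: a recurrent network whose internal and feedback weights are fixed while only the output weights are adapted (an O(N) update per step), together with an online small-gain check and a rescaling step that restores a stability margin when the check fails. It is a minimal sketch under stated assumptions: the reservoir-style state equation, the LMS-style update, and the specific gain bound are simplified stand-ins for illustration, not the paper's exact BPDC rule or its derived inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # number of recurrent units
W = rng.normal(size=(N, N))
W *= 0.8 / np.linalg.norm(W, 2)           # fixed recurrent weights, spectral norm 0.8
W_gain = 0.8
w_in = rng.normal(scale=0.5, size=N)      # fixed input weights
w_fb = rng.normal(scale=0.1, size=N)      # fixed output-feedback weights
w_out = np.zeros(N)                       # the only adapted parameters
eta = 0.05                                # learning rate (assumed value)

# Toy task (assumption for illustration): one-step-ahead sine prediction.
u_seq = np.sin(0.1 * np.arange(1000))
d_seq = np.roll(u_seq, -1)

x, y = np.zeros(N), 0.0
for u, d in zip(u_seq, d_seq):
    x = np.tanh(W @ x + w_in * u + w_fb * y)  # state update; tanh slope <= 1
    y = w_out @ x
    w_out += eta * (d - y) * x                # O(N) output-weight update
                                              # (simplified stand-in, not the BPDC rule)

    # Small-gain style check: with output feedback, the closed-loop map is
    # x -> tanh((W + w_fb w_out^T) x + w_in u), which is a contraction whenever
    # ||W||_2 + ||w_fb|| * ||w_out|| < 1, since tanh has slope at most 1.
    fb_gain = np.linalg.norm(w_fb) * np.linalg.norm(w_out)
    if W_gain + fb_gain >= 1.0:
        # Online rescaling: shrink w_out just enough to restore the margin.
        w_out *= 0.99 * (1.0 - W_gain) / fb_gain
```

Because only w_out is adapted, the stability condition can be re-evaluated after every update at negligible cost, and rescaling w_out alone suffices to pull the loop gain back below one without touching the fixed network weights.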

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence
Authors