Article ID: 9653597
Journal: Neurocomputing
Published Year: 2005
Pages: 19
File Type: PDF
Abstract
We provide insights into the organization and dynamics of recurrent online training algorithms by comparing real-time recurrent learning (RTRL) with a new continuous-time online algorithm. The latter is derived in the spirit of a recent approach introduced by Atiya and Parlos (IEEE Trans. Neural Networks 11 (3) (2000) 697), which leads to non-gradient search directions. We refer to this approach as Atiya-Parlos learning (APRL) and interpret it with respect to its strategy for minimizing the standard quadratic error. Simulations show that the differing approaches of RTRL and APRL lead to qualitatively different weight dynamics. A formal analysis of the single-output behavior of APRL further reveals that the weight dynamics favor a functional partition of the network into a fast output layer and a slower dynamical reservoir, whose rates of weight change are closely coupled.
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence