This paper extends an efficient and powerful approach for learning dynamics with Recurrent Neural Networks (RNNs). We use standard RNNs with one fully connected hidden layer instead of prestructured RNNs. By combining a variant of Differential Evolution (DE) with least squares optimization, the optimized RNNs are able to learn several common Multiple Superimposed Oscillator (MSO) benchmark problems multiple orders of magnitude more accurately than previously published results. Furthermore, for new and even more difficult instances of up to twelve superimposed waves, our setup achieves lower error rates than those reported previously for the best system on just eight waves. We deepen earlier findings regarding our approach and perform a variety of additional experiments, revealing interesting insights into the behavior of the trained networks. MSO reproduction studies show that the resulting RNNs generalize well: the optimized RNNs are able to identify those subcomponents currently active in an MSO sequence, and they are able to do so even when the subcomponents are phase-shifted and have different amplitudes compared to the original training sequence. On the other hand, generalization to other oscillation frequencies is possible only to a limited extent. In this case, the RNNs fall back to a weighted combination of signals that reflects the frequency spectrum of the originally learned dynamics after the washout phase. Finally, evaluations of the system on the nonlinear Hénon attractor suggest that our approach can also be used to predict and generate nonlinear dynamics more effectively than previous approaches.
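To make the setup concrete, the sketch below illustrates the least-squares half of the hybrid scheme on an MSO task: a standard RNN with one fully connected hidden layer and fixed random recurrent weights is driven by the signal, and only the linear readout is fitted in closed form. This is a minimal, hypothetical NumPy illustration; the DE variant that the paper uses to optimize the recurrent and input weights is omitted (the random weights here stand in for one DE candidate), and all parameter values (hidden size, washout length, spectral-radius scaling) are illustrative assumptions, not the paper's settings. The first eight frequencies follow the standard MSO benchmark definition.

```python
import numpy as np

# Standard MSO benchmark frequencies (MSO-n uses the first n of these)
FREQS = [0.2, 0.311, 0.42, 0.51, 0.63, 0.74, 0.85, 0.97]

def mso(n_waves, length):
    """MSO-n sequence: sum of n superimposed sine waves."""
    t = np.arange(length)
    return sum(np.sin(f * t) for f in FREQS[:n_waves])

def fit_readout(signal, n_hidden=40, washout=100, seed=0):
    """One inner step of the hybrid scheme (sketch): given fixed
    recurrent/input weights (which the full method would tune with
    Differential Evolution), fit the linear readout by least squares
    for one-step-ahead prediction."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n_hidden, n_hidden))
    # Scale the spectral radius below 1 for stable dynamics
    # (a common heuristic, assumed here for illustration).
    W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-0.1, 0.1, n_hidden)

    x = np.zeros(n_hidden)
    states = []
    for u in signal[:-1]:            # input = previous sample
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())

    # Discard the washout phase, then solve the readout in closed form.
    X = np.array(states[washout:])
    y = signal[1 + washout:]
    w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse = np.mean((X @ w_out - y) ** 2)
    return w_out, mse

sig = mso(2, 600)                    # MSO-2 training sequence
w_out, mse = fit_readout(sig)
```

The closed-form readout makes each DE fitness evaluation cheap: the evolutionary search only has to explore the recurrent and input weights, while the output layer is always optimal for the candidate at hand.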
Journal: Neurocomputing - Volume 192, 5 June 2016, Pages 128–138