Article ID: 410685
Journal: Neurocomputing
Published Year: 2012
Pages: 10
File Type: PDF
Abstract

Output feedback is crucial for autonomous and parameterized pattern generation with reservoir networks. Learning the read-out weights alters the output feedback loop and can lead to error amplification. Regularization is therefore important both for generalization and for reducing error amplification. We show that regularizing the reservoir and the read-out layer reduces the risk of error amplification, mitigates parameter dependency, and improves the task-specific performance of reservoir networks with output feedback. We also discuss the deeper connection between regularization of the learning process and stability of the trained network.
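To make the role of read-out regularization concrete, the following is a minimal sketch (not the authors' implementation) of an echo state network driven purely by output feedback, trained with a ridge-regularized read-out and then run autonomously. The network size, scalings, regularization strength, and the sine-generation task are illustrative assumptions, not taken from the paper.

```python
# Sketch: echo state network with output feedback and ridge read-out.
# All hyperparameters and the target pattern are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions and scaling (assumed values)
N, n_out = 200, 1
spectral_radius = 0.9
W = rng.uniform(-0.5, 0.5, (N, N))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))   # rescale spectral radius
W_fb = rng.uniform(-1.0, 1.0, (N, n_out))               # output feedback weights

def step(x, y_prev):
    """One reservoir update driven only by output feedback."""
    return np.tanh(W @ x + W_fb @ y_prev)

# Target pattern: a simple sine wave (illustrative task)
T, washout = 3000, 500
target = np.sin(0.1 * np.arange(T)).reshape(-1, n_out)

# Collect states with teacher forcing (feed back the target, not the output)
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = step(x, target[t - 1] if t > 0 else np.zeros(n_out))
    states[t] = x

# Ridge-regularized read-out: W_out = Y^T X (X^T X + beta I)^(-1)
X, Y = states[washout:], target[washout:]
beta = 1e-4                                              # regularization strength (assumed)
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ Y).T

# Autonomous generation: close the loop by feeding back the network's own output
x = states[-1].copy()
y = target[-1].copy()
for t in range(200):
    x = step(x, y)
    y = W_out @ x
# Without the ridge term, small read-out errors re-enter the reservoir through
# W_fb and can be amplified around the feedback loop; the penalty damps this.
```

The closed-loop generation phase is where the abstract's point about error amplification shows up: an unregularized least-squares read-out tends to fit the teacher-forced states too aggressively, so its errors grow once they are fed back through W_fb.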

Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence
Authors