Article ID: 411192
Journal: Neurocomputing
Published Year: 2007
Pages: 9
File Type: PDF
Abstract

Previous papers have noted the difficulty of obtaining neural models that remain stable under simulation when trained with prediction-error-based methods. Here the differences between the series–parallel and parallel identification structures for training neural models are investigated. The effect of the error-surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series–parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show that the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex.
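To make the distinction concrete, the following sketch contrasts the two identification structures on a hypothetical toy problem. A linear-in-the-parameters model stands in for the neural network (an assumption made purely to keep the example short and runnable; the paper itself concerns neural models), and the two-stage fit at the end mirrors the spirit of the combined series–parallel/parallel scheme: minimise the one-step error first, then refine on the simulation error. All function names and the crude finite-difference optimiser are illustrative, not the authors' method.

```python
import numpy as np

# Toy model: y_hat(t) = a*y_prev + b*u(t-1), parameters theta = (a, b).

def series_parallel_error(theta, u, y):
    """Series-parallel (one-step-ahead) error: the regressor uses the
    MEASURED past output y[t-1], i.e. teacher forcing."""
    a, b = theta
    return np.array([y[t] - (a * y[t-1] + b * u[t-1])
                     for t in range(1, len(y))])

def parallel_error(theta, u, y):
    """Parallel (simulation) error: the regressor feeds back the model's
    OWN previous prediction, so errors accumulate over the horizon."""
    a, b = theta
    y_hat, e = y[0], []            # initialise from the first measurement
    for t in range(1, len(y)):
        y_hat = a * y_hat + b * u[t-1]
        e.append(y[t] - y_hat)
    return np.array(e)

# Synthetic data from a "true" system with a=0.8, b=0.5 (noiseless).
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t-1]

def fit(err_fn, theta0, steps=200, lr=1e-3):
    """Gradient descent on the sum-of-squares cost, with a crude
    central finite-difference gradient (illustrative only)."""
    theta = np.array(theta0, float)
    for _ in range(steps):
        g = np.zeros(2)
        for i in range(2):
            d = np.zeros(2); d[i] = 1e-5
            g[i] = (np.sum(err_fn(theta + d, u, y)**2)
                    - np.sum(err_fn(theta - d, u, y)**2)) / 2e-5
        theta -= lr * g
    return theta

# Combined scheme: series-parallel stage first, then a parallel
# refinement stage started from the series-parallel estimate.
theta_sp = fit(series_parallel_error, [0.0, 0.0])
theta_combined = fit(parallel_error, theta_sp, lr=1e-4)
```

The design point the example surfaces is the one discussed in the abstract: the series-parallel cost here is quadratic in the parameters and easy to descend, whereas the parallel cost depends on the recursively simulated output and is generally non-convex, so starting the parallel stage from the series-parallel estimate avoids searching its harder error surface from a cold start.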

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence