Article ID: 409509
Journal: Neurocomputing
Published Year: 2015
Pages: 6
File Type: PDF
Abstract

In this paper, the deterministic convergence of the batch back-propagation algorithm with penalty (BPAP) is proved under relaxed conditions on the activation function, the learning rate, and the stationary point set of the error function. Both weak and strong convergence results are established. The boundedness of the weights during training is also proved in a simple and direct way. As a result, the usual assumption that the weights remain bounded, imposed elsewhere to guarantee convergence, is removed. Simulation results for an approximation problem are presented to support the theoretical findings.
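The abstract refers to batch back-propagation with a penalty term, i.e. full-gradient training where an L2 (weight-decay) term is added to the error function so that the weights stay bounded. The sketch below illustrates that setup on a toy approximation problem; the network size, learning rate `eta`, and penalty coefficient `lam` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy approximation problem: learn y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)
N = X.shape[0]

# One-hidden-layer network with a bounded, smooth activation (tanh).
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

eta, lam = 0.1, 1e-4  # learning rate and penalty coefficient (assumed values)

errors = []
for epoch in range(3000):
    # Forward pass over the whole batch.
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    resid = out - Y

    # Penalized batch error: E(w) = (1/2N) * sum(resid^2) + (lam/2) * ||w||^2.
    E = 0.5 / N * np.sum(resid ** 2) \
        + 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    errors.append(E)

    # Back-propagation: gradients of E, including the penalty term.
    gW2 = H.T @ resid / N + lam * W2
    gb2 = resid.sum(axis=0) / N
    dH = (resid @ W2.T) * (1.0 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / N + lam * W1
    gb1 = dH.sum(axis=0) / N

    # Batch (full-gradient) update: one step per pass over all samples.
    W2 -= eta * gW2; b2 -= eta * gb2
    W1 -= eta * gW1; b1 -= eta * gb1

print(f"initial error {errors[0]:.4f}, final error {errors[-1]:.4f}")
```

The penalty term keeps the weight norms from growing without bound during training, which is the mechanism the paper exploits to drop the separate boundedness assumption.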

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors