Article ID: 408444 | Journal: Neurocomputing | Published Year: 2011 | Pages: 6 | File Type: PDF
Abstract

In this paper, the deterministic convergence of an online gradient method with a penalty term and momentum is investigated for training two-layer feedforward neural networks. The monotonicity of the new error function with the penalty term during the training iterations is first proved. Based on this result, we show that the weights remain uniformly bounded during the training process and that the algorithm is deterministically convergent. Sufficient conditions are also provided for both weak and strong convergence.
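The abstract names the three ingredients of the training scheme: online (per-sample) gradient updates, a penalty term on the weights, and a momentum term. A minimal sketch of such a scheme is below; the specific penalty (L2 weight decay), the momentum update form, the network sizes, and all hyperparameter values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch: online gradient training of a two-layer feedforward
# network with an assumed L2 penalty term and classical momentum.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, hidden=4, eta=0.05, lam=1e-3, mu=0.5, epochs=50):
    """eta: learning rate, lam: penalty coefficient, mu: momentum (assumed names)."""
    n_in = X.shape[1]
    W = rng.normal(scale=0.5, size=(hidden, n_in))  # input-to-hidden weights
    v = rng.normal(scale=0.5, size=hidden)          # hidden-to-output weights
    dW_prev = np.zeros_like(W)
    dv_prev = np.zeros_like(v)
    for _ in range(epochs):
        for x, t in zip(X, y):                      # online: update per sample
            h = sigmoid(W @ x)
            err = (v @ h) - t
            # gradients of 0.5*err^2 + 0.5*lam*(||v||^2 + ||W||_F^2)
            gv = err * h + lam * v
            gW = np.outer(err * v * h * (1.0 - h), x) + lam * W
            # momentum step: new increment = gradient step + mu * previous increment
            dv = -eta * gv + mu * dv_prev
            dW = -eta * gW + mu * dW_prev
            v += dv
            W += dW
            dv_prev, dW_prev = dv, dW
    return W, v

# Toy usage: fit a bounded nonlinear target.
X = rng.normal(size=(40, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))
W, v = train(X, y)
```

The penalty term is what keeps the weight sequence bounded in the paper's analysis; in this sketch it appears as the `lam * W` and `lam * v` contributions to the gradients.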

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence