Article ID: 408442
Journal: Neurocomputing
Published Year: 2011
Pages: 4
File Type: PDF
Abstract

In this paper, the convergence of a new back-propagation algorithm with adaptive momentum is analyzed when it is used for training feedforward neural networks with a hidden layer. A convergence theorem is presented, and sufficient conditions are given that guarantee both weak and strong convergence results. Compared with existing results, our convergence result is deterministic in nature, and we do not require the error function to be quadratic or uniformly convex.
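The paper's exact adaptive-momentum update rule is not reproduced in this abstract, so the following is only a hypothetical sketch of the general scheme it describes: back-propagation for a one-hidden-layer feedforward network, with a momentum coefficient that adapts each step. Here the adaptation heuristic (damping momentum when the previous update opposes the current gradient) is an assumption, not the paper's rule; the network sizes, learning rate, and data are likewise illustrative.

```python
import numpy as np

# Hypothetical sketch of back-propagation with adaptive momentum for a
# one-hidden-layer network. The specific adaptive rule below is an assumed
# heuristic, not the rule analyzed in the paper.

rng = np.random.default_rng(0)

# Toy regression data (illustrative only)
X = rng.normal(size=(64, 2))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1]).reshape(-1, 1)

# One hidden layer with sigmoid activation, linear output
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(W1, W2):
    H = sigmoid(X @ W1)              # hidden activations
    err = H @ W2 - y                 # output error
    loss = 0.5 * np.mean(err ** 2)   # mean squared error
    g_out = err / len(X)
    gW2 = H.T @ g_out                # gradient w.r.t. output weights
    g_hid = (g_out @ W2.T) * H * (1.0 - H)
    gW1 = X.T @ g_hid                # gradient w.r.t. hidden weights
    return loss, gW1, gW2

def adaptive_mu(g, v, mu_max=0.9):
    # Assumed heuristic: keep momentum only when the previous update v
    # still points downhill relative to the current gradient g; otherwise
    # damp it toward zero.
    dot = -np.sum(g * v)
    norm = np.linalg.norm(g) * np.linalg.norm(v)
    return 0.0 if norm == 0.0 else mu_max * max(0.0, dot / norm)

lr = 0.05
v1 = np.zeros_like(W1)  # momentum buffers (previous weight updates)
v2 = np.zeros_like(W2)

losses = []
for _ in range(500):
    loss, g1, g2 = loss_and_grads(W1, W2)
    losses.append(loss)
    v1 = adaptive_mu(g1, v1) * v1 - lr * g1
    v2 = adaptive_mu(g2, v2) * v2 - lr * g2
    W1 += v1
    W2 += v2

print(f"initial loss {losses[0]:.4f}, final loss {losses[-1]:.4f}")
```

On this smooth toy problem the training loss decreases monotonically in practice, which is the kind of (weak) convergence behavior the paper's sufficient conditions are meant to guarantee; the paper's deterministic analysis makes this rigorous without assuming a quadratic or uniformly convex error function.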

Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence