Article ID: 407770
Journal: Neurocomputing
Published Year: 2012
Pages: 10
File Type: PDF
Abstract

In this paper, we study the convergence of an online gradient method with an inner-product penalty and adaptive momentum for feedforward neural networks, assuming that the training samples are permuted stochastically in each training cycle. Both two-layer and three-layer network models are considered, and two convergence theorems are established, with sufficient conditions proposed for both weak and strong convergence. To support these theoretical findings, the algorithm is applied to the classical two-spiral problem and to the identification of a Gabor function.
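The exact update rule and penalty form are given in the paper; purely as a rough illustration, the following is a minimal NumPy sketch of one training cycle of an online gradient step for a two-layer (single-hidden-layer) network, assuming a squared-error loss and a sigmoid activation. The names `train_cycle`, `penalty_coeff`, and `momentum_coeff` are illustrative, the penalty gradient is a plain weight-decay stand-in for the paper's inner-product penalty, and the momentum coefficient is held fixed rather than adapted, so this is a sketch of the general scheme, not the paper's algorithm.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_cycle(W, V, samples, targets, lr=0.1,
                    penalty_coeff=1e-4, momentum_coeff=0.5,
                    prev_dW=None, prev_dV=None, rng=None):
        """One cycle of online gradient descent on a two-layer network.

        W: hidden-layer weights (hidden x inputs); V: output weights (hidden,).
        Samples are visited in a fresh random order each cycle, matching the
        stochastic permutation assumed in the paper.
        """
        rng = rng or np.random.default_rng()
        if prev_dW is None:
            prev_dW, prev_dV = np.zeros_like(W), np.zeros_like(V)
        for i in rng.permutation(len(samples)):   # stochastic order per cycle
            x, t = samples[i], targets[i]
            h = sigmoid(W @ x)                    # hidden activations
            y = V @ h                             # network output
            e = y - t                             # squared-error gradient factor
            gV = e * h + penalty_coeff * V        # output-layer gradient + penalty
            gW = (e * V * h * (1 - h))[:, None] * x + penalty_coeff * W
            dV = -lr * gV + momentum_coeff * prev_dV   # momentum step
            dW = -lr * gW + momentum_coeff * prev_dW
            V, W = V + dV, W + dW
            prev_dV, prev_dW = dV, dW
        return W, V, prev_dW, prev_dV

In the adaptive-momentum scheme the paper studies, `momentum_coeff` would be recomputed at each step from the current gradient information rather than held constant as above.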

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence