Article ID: 407344
Journal: Neurocomputing
Published Year: 2012
Pages: 5 Pages
File Type: PDF
Abstract

In this paper, a penalty term is added to the conventional error function to improve the generalization ability of the Ridge Polynomial neural network. To guide the choice of appropriate learning parameters, we establish a monotonicity theorem and two convergence theorems, one weak and one strong, for the synchronous gradient method with penalty for this network. Experimental results on a function approximation problem confirm the validity of these theoretical results.
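The core idea of adding a penalty term to the error function can be illustrated with a minimal sketch. The model, learning rate, and penalty coefficient below are illustrative assumptions, not the paper's exact Ridge Polynomial network or parameter choices; the penalty shown is a standard L2 (weight-decay) term added to a mean-squared error, minimized by batch gradient descent.

```python
# Minimal sketch: gradient descent on an error function with an L2 penalty,
# E(w) = (1/2n) * sum_i (f(w, x_i) - y_i)^2 + (lam/2) * ||w||^2.
# The polynomial model and hyperparameters are illustrative assumptions.

def predict(w, x):
    # Simple polynomial model: y = sum_k w[k] * x**k
    return sum(wk * x ** k for k, wk in enumerate(w))

def train(data, degree=2, eta=0.1, lam=0.01, epochs=2000):
    w = [0.0] * (degree + 1)
    n = len(data)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for x, y in data:
            err = predict(w, x) - y
            for k in range(len(w)):
                grad[k] += err * x ** k
        # Gradient of E(w): data term plus lam * w from the penalty
        w = [wk - eta * (gk / n + lam * wk) for wk, gk in zip(w, grad)]
    return w

# Fit y = x^2 on a small sample; the penalty shrinks the weight norm
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]
w = train(data)
```

The penalty biases training toward smaller weights, which is the mechanism by which such terms improve generalization; the paper's theorems concern when this penalized iteration is monotone and convergent.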

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence