Article ID: 412976
Journal: Neurocomputing
Published Year: 2009
Pages: 7 Pages
File Type: PDF
Abstract

Perceptrons, proposed in the seminal 1943 paper of McCulloch and Pitts, have remained of interest to the neural network community because of their simplicity and their usefulness in classifying linearly separable data; they can be viewed as implementing iterative procedures for “solving” systems of linear inequalities. Gradient descent and conjugate gradient methods, normally used for linear equalities, can be adapted to linear inequalities through simple modifications that have been proposed in the literature but not completely analyzed. This paper applies a recently proposed control-inspired approach to the design of iterative steepest descent and conjugate gradient algorithms for perceptron training in batch mode: certain parameters of the training algorithm are regarded as controls, and a control Liapunov technique is then used to choose appropriate values of these parameters.
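As a minimal sketch of the connection between perceptron training and linear inequalities, the following batch steepest-descent routine drives the violated inequalities y_i(w·x_i) > 0 toward satisfaction by descending the classical perceptron criterion. The function name, learning rate, and toy data are illustrative assumptions; the paper's control-Liapunov parameter choices are not reproduced here.

```python
import numpy as np

def batch_perceptron(X, y, lr=0.1, epochs=100):
    """Batch steepest-descent perceptron training (illustrative sketch).

    Treats training as "solving" the linear inequalities y_i * (w . x_i) > 0
    by gradient descent on the perceptron criterion
        E(w) = sum over misclassified i of -y_i * (w . x_i),
    whose negative gradient is sum over misclassified i of y_i * x_i.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb bias into w
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        margins = y * (Xb @ w)
        mis = margins <= 0                  # currently violated inequalities
        if not mis.any():
            break                           # all inequalities satisfied
        # steepest-descent step on the perceptron criterion
        w += lr * (y[mis, None] * Xb[mis]).sum(axis=0)
    return w

# Linearly separable toy data (hypothetical): class given by sign of x[0]
X = np.array([[2.0, 1.0], [1.5, -0.5], [-2.0, 0.5], [-1.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = batch_perceptron(X, y)
```

A conjugate gradient variant would replace the raw gradient step with a search direction that mixes in the previous direction; the paper's contribution is choosing such step and mixing parameters via a control Liapunov function rather than fixing them a priori.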

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors