Article ID: 413073
Journal: Neurocomputing
Published Year: 2006
Pages: 19
File Type: PDF
Abstract

An adaptive algorithm for function minimization based on conjugate gradients is developed for the problem of finding linear discriminant functions in pattern classification. On several datasets, the algorithm converges to a solution in a finite number of steps in both the consistent and the inconsistent case. We have applied our algorithm and compared its performance with that of the adaptive versions of the Ho-Kashyap procedure (AHK); we have also compared the batch version of the algorithm with batch-mode AHK. The results show that the proposed adaptive conjugate gradient algorithm (CGA) gives substantially better performance in terms of both the number of training cycles required and the classification rate, and that batch-mode CGA likewise outperforms batch-mode AHK.
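The paper's exact update rules are not given in the abstract, but the general idea of fitting a linear discriminant by conjugate-gradient minimization of a squared-error criterion can be sketched as follows. This is an illustrative assumption, not the authors' CGA: it minimizes J(w) = ||Yw - b||^2 (a Ho-Kashyap-style criterion with a fixed margin vector b = 1) using the linear conjugate-gradient method on the normal equations, with synthetic two-class data.

```python
import numpy as np

# Hypothetical sketch (not the paper's CGA): learn a linear discriminant w
# by minimizing J(w) = ||Y w - b||^2 with the conjugate-gradient method.
rng = np.random.default_rng(0)
X1 = rng.normal([2.0, 2.0], 1.0, size=(50, 2))    # class 1 samples
X2 = rng.normal([-2.0, -2.0], 1.0, size=(50, 2))  # class 2 samples
X = np.vstack([X1, X2])
labels = np.hstack([np.ones(50), -np.ones(50)])

# Augment with a bias term and sign-normalize so that Y w > 0
# means the sample is classified correctly.
Y = labels[:, None] * np.hstack([X, np.ones((100, 1))])
b = np.ones(100)  # fixed margin vector

# Conjugate gradient on the normal equations (Y^T Y) w = Y^T b.
A, rhs = Y.T @ Y, Y.T @ b
w = np.zeros(3)
r = rhs - A @ w   # residual
p = r.copy()      # search direction
for _ in range(3):  # dimension is 3, so at most 3 CG steps are needed
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)      # exact line search along p
    w += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves conjugacy factor
    p = r_new + beta * p
    r = r_new

accuracy = np.mean(Y @ w > 0)  # fraction of training samples classified correctly
print(f"training accuracy: {accuracy:.2f}")
```

Because the criterion is quadratic, conjugate gradients reach the minimizer in at most d steps (d = augmented dimension), which is consistent with the finite-step convergence the abstract reports; the paper's adaptive (sample-by-sample) version would differ from this batch sketch.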

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence