| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 412221 | Neurocomputing | 2014 | 8 Pages | |
Abstract
In this work, a specific preconditioning technique is developed to improve the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints. The discrete-time network is a recently published model with the broadest range of applicability to various optimization problems and constraints. The proposed preconditioning technique is shown to improve the convergence speed of the model significantly, thus enhancing the model's applicability to these problems. In addition to the theoretical analysis, extensive experimental results are presented to illustrate the technique and the significant improvement attained.
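As a rough illustration of why preconditioning matters in this setting (a generic sketch, not the paper's algorithm): for a quadratic cost 0.5·xᵀQx + cᵀx, rescaling the variables so that the effective Hessian is better conditioned reduces the number of steps a fixed-step iteration needs to converge. The snippet below uses a diagonal (Jacobi) rescaling and a plain gradient iteration as a stand-in for a discrete-time network update, ignoring the constraints for simplicity; all matrices, scales, and tolerances are illustrative assumptions.

```python
# Illustrative sketch only: diagonal (Jacobi) preconditioning of a quadratic
# cost 0.5*x'Qx + c'x, showing that a better-conditioned Hessian lets a
# fixed-step iteration (a stand-in for a discrete-time recurrent update)
# converge in far fewer steps. Constraints are omitted for simplicity.
import numpy as np

rng = np.random.default_rng(0)

# Build an SPD Hessian whose variables are badly scaled (hence ill-conditioned).
S = np.diag([1.0, 3.0, 10.0, 1.0, 5.0, 2.0])      # uneven variable scales (assumed)
B = rng.standard_normal((6, 6))
Q = S @ (B @ B.T + np.eye(6)) @ S                  # symmetric positive definite
c = rng.standard_normal(6)

# Jacobi preconditioner: D = diag(Q)^(-1/2); substitute x = D y.
D = np.diag(1.0 / np.sqrt(np.diag(Q)))
Q_pre = D @ Q @ D                                  # Hessian of the transformed problem
c_pre = D @ c

print("cond(Q)    =", np.linalg.cond(Q))
print("cond(DQD)  =", np.linalg.cond(Q_pre))

def iterations_to_converge(H, g, tol=1e-6, max_iter=200000):
    """Fixed-step gradient iteration on 0.5*x'Hx + g'x; returns step count."""
    x = np.zeros_like(g)
    step = 1.0 / np.linalg.norm(H, 2)              # safe step size (1 / largest eigenvalue)
    for k in range(max_iter):
        grad = H @ x + g
        if np.linalg.norm(grad) < tol:
            return k
        x = x - step * grad
    return max_iter

print("iterations without preconditioning:", iterations_to_converge(Q, c))
print("iterations with preconditioning   :", iterations_to_converge(Q_pre, c_pre))
```

The condition-number reduction is the generic mechanism behind such speed-ups; the paper's contribution is a specific preconditioning scheme tailored to the constrained discrete-time network, which this sketch does not reproduce.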
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
María José Pérez-Ilzarbe
