Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4636317 | Applied Mathematics and Computation | 2007 | 8 Pages |
Abstract
In this paper, we develop an adaptive nonmonotone memory gradient method for unconstrained optimization. The novelty of this method is that the stepsize can be adjusted according to the characteristics of the objective function. We show strong global convergence of the proposed method without requiring Lipschitz continuity of the gradient. Our numerical experiments indicate that the method is very encouraging.
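The abstract does not reproduce the paper's specific adaptive stepsize rule, but the general family it belongs to is well known: a memory gradient method combines the current negative gradient with the previous search direction, and a nonmonotone (Grippo–Lampariello–Lucidi-style) line search accepts a step by comparing against the maximum of recent function values rather than only the latest one. The sketch below illustrates that generic scheme only; the coefficient `beta_coef`, the memory length, and the backtracking rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, memory=5, beta_coef=0.5,
                                delta=1e-4, tol=1e-6, max_iter=1000):
    """Generic memory gradient method with a nonmonotone line search.
    Illustrative sketch only; not the adaptive rule from the paper."""
    x = np.asarray(x0, dtype=float)
    d_prev = np.zeros_like(x)
    f_hist = [f(x)]                       # recent f-values for the nonmonotone test
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Memory gradient direction: steepest descent plus a fraction
        # of the previous direction.
        d = -g + beta_coef * d_prev
        if g @ d >= 0:                    # safeguard: fall back to steepest descent
            d = -g
        # Nonmonotone Armijo test against the max of the last `memory` values.
        f_ref = max(f_hist[-memory:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = x + alpha * d
        d_prev = d
        f_hist.append(f(x))
    return x
```

For a simple convex quadratic such as `f(x) = ||x - 1||^2`, the iteration converges to the minimizer at the vector of ones; the nonmonotone reference value `f_ref` matters mainly on nonconvex or ill-conditioned problems, where it allows occasional increases in `f` and avoids the short, zigzagging steps a strict monotone test can force.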
Related Topics
Physical Sciences and Engineering
Mathematics
Applied Mathematics
Authors
Zhensheng Yu, Weiguo Zhang, Baofeng Wu