Article code: 409063
Journal code: 679053
Publication year: 2008
English article: 8 pages, PDF
Full-text version: free download
English title of the ISI article
Natural conjugate gradient training of multilayer perceptrons
Related subjects
Engineering and Basic Sciences / Computer Engineering / Artificial Intelligence
English abstract

Natural gradient (NG) descent, arguably the fastest on-line method for multilayer perceptron (MLP) training, exploits the “natural” Riemannian metric that the Fisher information matrix defines in the MLP weight space. It also accelerates ordinary gradient descent in a batch setting, but there the Fisher matrix essentially coincides with the Gauss–Newton approximation of the Hessian of the MLP square error function, so NG is related to the Levenberg–Marquardt (LM) method, which may explain its speed-up with respect to standard gradient descent. However, even this comparison is generous to NG descent, as it should only achieve linear convergence in a Riemannian weight space, compared with the superlinear convergence of the LM method in the Euclidean weight space. This suggests that it may be interesting to consider superlinear methods for MLP training in a Riemannian setting. In this work we discuss how to introduce a natural conjugate gradient (CG) method for MLP training. While a fully Riemannian formulation would result in an extremely costly procedure, we make some simplifying assumptions that should give descent directions with properties similar to those of standard CG descent. Moreover, we show numerically that natural CG may lead to faster convergence to better minima, although at a greater cost than that of standard CG, a cost that may nevertheless be alleviated using a diagonal natural CG variant.
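Since the abstract compresses several computational ideas, a small illustration may help. The following is a minimal, hypothetical Python sketch of the diagonal natural CG variant mentioned above, not the authors' implementation: the Fisher matrix is replaced by its diagonal, the conjugate direction uses a Polak–Ribière-style coefficient computed in the natural metric, and a fixed step size stands in for the line search the paper would presumably use. All names, the damping constant, and the step size are assumptions for illustration.

```python
# A minimal sketch, not the paper's algorithm: diagonal natural CG on a
# least-squares problem. Damping and step size are illustrative guesses.
import numpy as np

def diagonal_fisher(jac, damping=1e-4):
    # Diagonal of the Fisher / Gauss-Newton matrix J^T J / N, where each row
    # of jac is one sample's output gradient; damping stabilizes the inverse.
    return np.mean(jac ** 2, axis=0) + damping

def natural_cg_direction(grad, grad_prev, dir_prev, fisher_diag):
    # Polak-Ribiere-style conjugate step taken in the natural metric: the
    # Euclidean gradient is preconditioned by the inverse (diagonal) Fisher.
    nat_grad = grad / fisher_diag                 # ~ G^{-1} grad
    if dir_prev is None:                          # first step: plain natural gradient
        return -nat_grad
    beta = nat_grad @ (grad - grad_prev) / ((grad_prev / fisher_diag) @ grad_prev)
    return -nat_grad + max(beta, 0.0) * dir_prev  # PR+ clamp keeps descent directions

# Toy usage on linear least squares, where Fisher and Gauss-Newton coincide.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)
F = diagonal_fisher(X)                            # for a linear model, J = X
w, d, g_prev = np.zeros(5), None, None
for _ in range(100):
    g = X.T @ (X @ w - y) / len(y)                # batch gradient of the squared error
    d = natural_cg_direction(g, g_prev, d, F)
    w += 0.5 * d                                  # fixed step in place of a line search
    g_prev = g
print(np.mean((X @ w - y) ** 2))                  # residual error: should be near zero
```

For an MLP the same recursion would apply to the flattened weight vector, with the per-sample output gradients supplying the Fisher estimate; the diagonal approximation is what keeps the preconditioning cost linear in the number of weights.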

Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 71, Issues 13–15, August 2008, Pages 2499–2506
Authors
, ,