Article code: 404845
Journal code: 677457
Publication year: 2007
English article: 11-page PDF
Full-text version: free download
English title of the ISI article
Natural learning in NLDA networks
Related topics
Engineering and Basic Sciences · Computer Engineering · Artificial Intelligence
English abstract

Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require defining an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z, and then defining I = E[Z(X, W) Z(X, W)^t] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square-error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss–Newton minimization method, this is not the case for NLDA training: we show analytically and numerically that the Hessian and information matrices are different.
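The natural-like gradient construction in the abstract can be sketched numerically: given per-sample gradient vectors z_i = Z(x_i, W), one estimates ∇J(W) as their mean, estimates I = E[Z Z^t] as the empirical second-moment matrix, and steps along I⁻¹∇J instead of ∇J. The following is a minimal illustrative sketch, not the paper's implementation; the function name, learning rate, and the damping term (added so the estimated I stays invertible) are assumptions.

```python
import numpy as np

def natural_gradient_step(W, Z, lr=0.1, damping=1e-4):
    """One natural-like gradient update (illustrative sketch).

    W : (d,) current weight vector
    Z : (n, d) matrix whose rows are samples z_i = Z(x_i, W)
    """
    g = Z.mean(axis=0)                 # Monte Carlo estimate of grad J(W) = E[Z]
    I = (Z.T @ Z) / Z.shape[0]         # Fisher-like matrix I = E[Z Z^t]
    I += damping * np.eye(I.shape[0])  # damping keeps I invertible (assumption)
    step = np.linalg.solve(I, g)       # natural direction I^{-1} grad J
    return W - lr * step
```

Because I is only an empirical second-moment estimate, solving the damped linear system is preferable to forming an explicit inverse; this is also where the extra per-iteration cost over plain gradient descent comes from.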

Publisher
Database: Elsevier - ScienceDirect
Journal: Neural Networks - Volume 20, Issue 5, July 2007, Pages 610–620
Authors