Article code: 404381
Journal code: 677419
Publication year: 2011
English article: 12-page PDF, full text free to download
English Title of the ISI Article
Quasi-objective nonlinear principal component analysis
Related Subjects
Engineering and Basic Sciences › Computer Engineering › Artificial Intelligence
English Abstract

By means of mathematical analysis and numerical experimentation, this study shows that the problems of non-uniqueness of solutions and data over-fitting that plague the multilayer feedforward neural network for NonLinear Principal Component Analysis (NLPCA) are caused by an inappropriate neural network architecture. A simplified two-hidden-layer feedforward neural network, which has no encoding layer and no bias terms in the mathematical definitions of the bottleneck and output neurons, is proposed for conducting NLPCA. This new, compact NLPCA model alleviates the aforementioned problems encountered with the more complex network architecture. The numerical experiments are based on a data set generated from a well-known nonlinear system, the Lorenz chaotic attractor. Given the same number of bottleneck neurons (i.e., reduced dimensions), the compact NLPCA model effectively characterizes and represents the Lorenz attractor with significantly fewer parameters than the corresponding three-hidden-layer feedforward neural network for NLPCA.
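The compact architecture described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes tanh activations in the two hidden layers, a linear bottleneck and output layer without bias terms (as the abstract specifies), and standard Lorenz parameters with simple Euler integration for the data set. Layer widths and the random initialization are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz_series(n=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Generate a trajectory of the Lorenz attractor by Euler integration."""
    x = np.empty((n, 3))
    x[0] = (1.0, 1.0, 1.0)
    for t in range(n - 1):
        xs, ys, zs = x[t]
        x[t + 1] = x[t] + dt * np.array([
            sigma * (ys - xs),
            xs * (rho - zs) - ys,
            xs * ys - beta * zs,
        ])
    return x

X = lorenz_series()
X = (X - X.mean(0)) / X.std(0)   # standardize each coordinate

# Compact two-hidden-layer NLPCA autoencoder (assumed layer sizes):
#   input -> tanh hidden -> linear bottleneck (no bias)
#         -> tanh hidden -> linear output (no bias)
d_in, d_hid, d_bot = 3, 6, 1     # one nonlinear principal component
W1 = rng.normal(0, 0.1, (d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0, 0.1, (d_hid, d_bot))            # bottleneck: no bias term
W3 = rng.normal(0, 0.1, (d_bot, d_hid)); b3 = np.zeros(d_hid)
W4 = rng.normal(0, 0.1, (d_hid, d_in))             # output: no bias term

def forward(X):
    h1 = np.tanh(X @ W1 + b1)    # nonlinear mapping layer
    u = h1 @ W2                  # nonlinear principal component scores
    h2 = np.tanh(u @ W3 + b3)    # nonlinear demapping layer
    return u, h2 @ W4            # scores and reconstruction

u, X_hat = forward(X)
print(u.shape, X_hat.shape)      # (2000, 1) (2000, 3)
```

The weights would still need to be fitted by minimizing the reconstruction error; the point of the sketch is the parameter count: dropping the encoding layer and the bottleneck/output biases leaves only `W1, b1, W2, W3, b3, W4`, noticeably fewer parameters than a three-hidden-layer NLPCA network of comparable width.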

Publisher
Database: Elsevier - ScienceDirect
Journal: Neural Networks - Volume 24, Issue 2, March 2011, Pages 159–170
Authors