Article code: 407555
Journal code: 678155
Publication year: 2013
English article: 7 pages, PDF
Full-text version: free download
English title of the ISI article
Architecture selection for networks trained with extreme learning machine using localized generalization error model
Related topics
Engineering and Basic Sciences; Computer Engineering; Artificial Intelligence
English abstract

The initial localized generalization error model (LGEM) aims to find an upper bound on the error between a target function and a radial basis function neural network (RBFNN) within a neighborhood of the training samples. The key contribution of LGEM is that the generalization error is bounded above by the sum of three terms: the training error, a stochastic sensitivity measure (SSM), and a constant. This paper extends the initial LGEM to a new LGEM model for single-hidden-layer feed-forward neural networks (SLFNs) trained with the extreme learning machine (ELM), a recent non-iterative training algorithm. The extended LGEM provides useful guidelines for improving the generalization ability of SLFNs trained with ELM. An architecture selection algorithm for SLFNs is also proposed based on the extended LGEM. Experimental results on a number of benchmark data sets show that an approximately optimal architecture, in terms of the number of hidden neurons of an SLFN, can be found with this method. Furthermore, experimental results on eleven UCI data sets show that the proposed method is both effective and efficient.
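The abstract summarizes the LGEM bound as the sum of the training error, the stochastic sensitivity measure (SSM), and a constant, and the proposed architecture selection picks the number of hidden neurons that minimizes this bound for ELM-trained SLFNs. The Python sketch below illustrates that idea only in outline; it is not the authors' implementation, and the sigmoid activation, the uniform weight range, the Monte-Carlo SSM estimate over a Q-neighborhood of width q, and the omission of the constant term (assumed architecture-independent) are all assumptions made for the example.

import numpy as np

def elm_train(X, T, n_hidden, rng):
    # ELM training of an SLFN: random input weights and biases,
    # output weights solved in closed form (no iterative optimization).
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def stochastic_sensitivity(X, W, b, beta, q, n_perturb, rng):
    # Monte-Carlo estimate of the SSM: mean squared change of the network
    # output when each input is perturbed within a Q-neighborhood of width q.
    y0 = elm_predict(X, W, b, beta)
    diffs = [
        np.mean((elm_predict(X + rng.uniform(-q, q, size=X.shape), W, b, beta) - y0) ** 2)
        for _ in range(n_perturb)
    ]
    return float(np.mean(diffs))

def select_architecture(X, T, candidate_sizes, q=0.1, n_perturb=20, seed=0):
    # Scan candidate hidden-layer sizes and keep the one minimizing an
    # LGEM-style criterion: training MSE + SSM (constant term omitted,
    # assumed not to depend on the architecture).
    rng = np.random.default_rng(seed)
    best_size, best_bound = None, np.inf
    for m in candidate_sizes:
        W, b, beta = elm_train(X, T, m, rng)
        mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
        ssm = stochastic_sensitivity(X, W, b, beta, q, n_perturb, rng)
        if mse + ssm < best_bound:
            best_size, best_bound = m, mse + ssm
    return best_size, best_bound

A call such as select_architecture(X, T, range(5, 101, 5)) would, under these assumptions, scan hidden-layer sizes from 5 to 100 and return the size with the smallest estimated bound.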

Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 102, 15 February 2013, Pages 3–9
Authors