Article code: 407527
Journal code: 678146
Publication year: 2015
Length: 8 pages (PDF, free full-text download)
English title of the ISI article
Efficient incremental construction of RBF networks using quasi-gradient method
Related subjects
Engineering and Basic Sciences › Computer Engineering › Artificial Intelligence
English abstract

Artificial neural networks have been found to be very efficient universal approximators. Single Layer Feedforward Networks (SLFN) are the most popular and the easiest to train. The neurons in these networks can use either sigmoidal functions or radial basis functions (RBF) as activation functions, and both have been shown to work efficiently. Sigmoidal networks are already well described in the literature, so this paper focuses on the construction of an SLFN architecture using RBF neurons. Many algorithms exist for constructing or training networks to solve function approximation problems. This paper proposes an algorithm that modifies the Incremental Extreme Learning Machine (I-ELM) family of algorithms by eliminating randomness in the learning process with respect to the center positions and widths of the RBF neurons. To do this, the input with the highest error magnitude is recorded during error calculation and then used as the center for the next incrementally added neuron. The radius of the new neuron is then chosen iteratively using the Nelder–Mead simplex method. This preserves the universal approximation properties of I-ELM while greatly reducing the size of the trained RBF networks.
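The incremental procedure the abstract describes can be sketched in code. The following is a minimal illustration, not the authors' implementation: it uses Gaussian RBF neurons, picks the training point with the largest residual as the next center, and tunes that neuron's width with a small hand-rolled one-dimensional Nelder–Mead search (the paper applies the Nelder–Mead simplex method; all function names, the log-radius parameterization, and the least-squares output weight are assumptions made for this sketch).

```python
import numpy as np

def rbf(x, c, r):
    # Gaussian radial basis function with center c and radius (width) r
    return np.exp(-np.sum((x - c) ** 2, axis=-1) / (r ** 2))

def nelder_mead_1d(f, x0, step=0.5, iters=40):
    """Minimal 1-D Nelder-Mead: a two-point simplex with reflection,
    expansion, and contraction steps."""
    a, b = x0, x0 + step
    fa, fb = f(a), f(b)
    for _ in range(iters):
        if fa > fb:                      # keep `a` as the best point
            a, b, fa, fb = b, a, fb, fa
        r, fr = 2 * a - b, f(2 * a - b)  # reflect worst through best
        if fr < fa:
            e, fe = 3 * a - 2 * b, f(3 * a - 2 * b)  # try expansion
            b, fb = (e, fe) if fe < fr else (r, fr)
        else:
            c, fc = (a + b) / 2, f((a + b) / 2)      # contract toward best
            if fc < fb:
                b, fb = c, fc
            else:
                break
    return a if fa < fb else b

def train_incremental_rbf(X, y, max_neurons=20, tol=1e-3):
    """Incrementally add RBF neurons: each new center is the training
    point with the largest residual error, its width is tuned by the
    1-D Nelder-Mead search, and its output weight is the least-squares
    fit to the current residual."""
    residual = y.copy()
    centers, radii, weights = [], [], []
    for _ in range(max_neurons):
        idx = np.argmax(np.abs(residual))   # worst-approximated input
        c = X[idx]

        def sse_for_log_radius(log_r):
            h = rbf(X, c, np.exp(log_r))    # log keeps the radius positive
            w = h @ residual / (h @ h)      # optimal output weight
            return np.sum((residual - w * h) ** 2)

        r = np.exp(nelder_mead_1d(sse_for_log_radius, 0.0))
        h = rbf(X, c, r)
        w = h @ residual / (h @ h)
        centers.append(c); radii.append(r); weights.append(w)
        residual = residual - w * h
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break
    return centers, radii, weights, residual

# Toy usage: approximate a smooth 1-D target function
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sinc(X[:, 0])
centers, radii, weights, residual = train_incremental_rbf(X, y)
rmse = np.sqrt(np.mean(residual ** 2))
```

Because the center is always the worst-fit point rather than a random draw, each added neuron attacks the largest remaining error, which is what lets the network stay small compared with the random-center I-ELM variants.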

Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 150, Part B, 20 February 2015, Pages 349–356