Article code | Journal code | Year | English article | Full-text version |
---|---|---|---|---|
404557 | 677437 | 2008 | 6-page PDF | Free download |
In this article we propose a new perspective on feed-forward neural network modeling. Working within the framework of nonlinear regression models, we construct computer-aided D-optimal designs for this class of neural models; such designs can be seen as a particular case of active learning. Classical algorithms are used to construct local approximate and local exact D-optimal designs. We observed that the so-called generalization of a neural network (statisticians may be more familiar with the equivalent term "predictive ability") improves as the D-efficiency of the chosen "learning set design" increases. We thus showed that the D-efficiency criterion can support a better strategy for the neural network learning phase than the uniform random sampling standard in this field. Our proposal rests on two possible strategies: a One-Step Strategy and a Full Sequential Strategy. Intensive Monte Carlo simulations on an academic example show that the proposed D-optimal "learning set design" strategies lead to a substantial improvement in the use of neural network models.
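The D-optimal design idea above can be illustrated with a minimal sketch. The network is linearized around a nominal parameter vector, and training inputs are picked greedily to maximize the determinant of the resulting Fisher information matrix. This is a simplified greedy variant for illustration only, not the classical exchange algorithms the authors use; the one-hidden-unit tanh network, the parameter values, and the ridge term are all assumptions introduced here.

```python
import numpy as np

def jacobian(theta, x):
    # Jacobian of a toy one-hidden-unit network f(x) = w2 * tanh(w1*x + b)
    # with respect to theta = (w1, b, w2); x is a 1-D array of inputs.
    w1, b, w2 = theta
    t = np.tanh(w1 * x + b)
    dt = 1.0 - t ** 2          # derivative of tanh
    return np.array([w2 * dt * x, w2 * dt, t]).T   # shape (n, 3)

def greedy_d_optimal(theta, candidates, n_points, ridge=1e-8):
    """Greedily build a learning-set design maximizing det of the
    (linearized) Fisher information matrix M = J^T J."""
    J = jacobian(theta, np.asarray(candidates, dtype=float))
    p = J.shape[1]
    chosen = []
    M = ridge * np.eye(p)      # small ridge so early determinants are nonzero
    for _ in range(n_points):
        best_i, best_det = None, -np.inf
        for i in range(len(J)):
            if i in chosen:
                continue
            # Rank-one update: information gained by adding candidate i.
            d = np.linalg.det(M + np.outer(J[i], J[i]))
            if d > best_det:
                best_det, best_i = d, i
        chosen.append(best_i)
        M = M + np.outer(J[best_i], J[best_i])
    return np.array(chosen), M

# Pick a 6-point learning set from a uniform candidate grid.
idx, M = greedy_d_optimal((1.5, 0.2, 1.0), np.linspace(-3.0, 3.0, 61), 6)
```

A full sequential strategy, as mentioned in the abstract, would interleave such selection steps with re-estimation of `theta` after each new observation, whereas a one-step strategy fixes the design once from the initial estimate.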
Journal: Neural Networks - Volume 21, Issue 7, September 2008, Pages 945–950