Article ID: 535231
Journal: Pattern Recognition Letters
Published Year: 2009
Pages: 6 Pages
File Type: PDF
Abstract

Choosing a useful combination of input variables and an appropriate model complexity is an essential task in nonlinear regression analysis because of the risk of overfitting. This article provides a workable solution for the multilayer perceptron model. An initial structure of the model, including all the input variables, is fixed at the outset. When the model is fitted with the proposed penalization method, only the most useful input variables and hidden nodes remain effective. The method is tested on three benchmark data sets. Experimental results show that removing useless input variables and hidden nodes from the model improves its generalization capability. In addition, the proposed method compares favorably with other penalization methods.
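The abstract does not specify the exact form of the penalty, so the following is only a minimal sketch of the general idea: start from an over-sized multilayer perceptron containing all input variables, train it with a sparsity-inducing penalty (here a lasso-style L1 term on the input-to-hidden weights, used as a stand-in for the paper's penalization method), and then treat inputs and hidden nodes whose associated weights collapse toward zero as candidates for removal. The data set, penalty strength, and relevance scores below are illustrative assumptions, not the authors' settings.

```python
# Sketch: pruning an MLP regressor via a sparsity penalty (assumed L1 form).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: only the first two of five inputs are relevant.
n, d_in, d_hid = 200, 5, 8
X = rng.normal(size=(n, d_in))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

# Start from an over-sized model that includes all inputs and hidden nodes.
W1 = rng.normal(scale=0.5, size=(d_in, d_hid))
b1 = np.zeros(d_hid)
W2 = rng.normal(scale=0.5, size=d_hid)
b2 = 0.0

lam, lr = 1e-3, 0.05  # penalty strength and learning rate (assumed values)

for epoch in range(2000):
    # Forward pass with tanh hidden units.
    H = np.tanh(X @ W1 + b1)          # (n, d_hid)
    pred = H @ W2 + b2                # (n,)
    err = pred - y

    # Gradients of mean squared error plus an L1 penalty on W1.
    gW2 = H.T @ err / n
    gb2 = err.mean()
    gH = np.outer(err, W2) * (1.0 - H ** 2)
    gW1 = X.T @ gH / n + lam * np.sign(W1)
    gb1 = gH.mean(axis=0)

    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

# Inputs and hidden nodes whose weights shrink to ~0 can be pruned from the model.
input_relevance = np.abs(W1).sum(axis=1)
hidden_relevance = np.abs(W1).sum(axis=0) * np.abs(W2)
print("input relevance:", np.round(input_relevance, 3))
print("hidden-node relevance:", np.round(hidden_relevance, 3))
```

In this toy setup the relevance scores of the three irrelevant inputs shrink relative to the two informative ones, which mirrors the abstract's claim that penalized fitting leaves only the useful input variables and hidden nodes effective.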

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition