| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 404904 | Neural Networks | 2006 | 8 Pages | |
Abstract
Neural networks (NNs) are well known for their flexibility, which makes them attractive for problems where there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can lead to overfitting and hamper the generalization of neural networks. Many approaches to regularizing NNs have been suggested, but most of them rest on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior for feed-forward networks in accordance with Bayesian probability theory. An optimal network is then determined by Bayesian model comparison, verifying the applicability of this approach. In addition, the presented prior enables cell pruning.
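The transformation-invariant prior itself is derived in the full text; as a rough, generic illustration of the Bayesian model-comparison step named in the abstract, the sketch below scores one-hidden-layer regression networks of different sizes by a Laplace-approximated log evidence. Everything in it is an assumption for illustration: the toy sine data, the hyperparameters `alpha` and `beta`, and the plain Gaussian weight prior, which merely stands in for the invariance prior developed in the paper.

```python
# A minimal sketch of Bayesian model comparison for feed-forward networks via
# the Laplace approximation to the evidence. NOTE: this uses a standard
# Gaussian weight prior, NOT the paper's transformation-invariant prior; the
# data and the hyperparameters alpha (prior precision) and beta (noise
# precision) are assumptions made for this example.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed for illustration).
N = 40
x = np.linspace(-3.0, 3.0, N)
t = np.sin(x) + rng.normal(0.0, 0.1, N)
alpha, beta = 0.01, 100.0  # assumed fixed hyperparameters

def forward(w, h, x):
    """1-input, h-hidden (tanh), 1-output network; w packs W1, b1, W2, b2."""
    W1, b1, W2, b2 = w[:h], w[h:2*h], w[2*h:3*h], w[3*h]
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def neg_log_post(w, h):
    """M(w) = beta*E_D + alpha*E_W, with E_D = 0.5*SSE and E_W = 0.5*||w||^2."""
    err = forward(w, h, x) - t
    return 0.5 * beta * (err @ err) + 0.5 * alpha * (w @ w)

def hessian(f, w, eps=1e-4):
    """Finite-difference Hessian; adequate for the few dozen weights here."""
    n = len(w)
    H = np.zeros((n, n))
    f0 = f(w)
    for i in range(n):
        for j in range(i, n):
            wi = w.copy(); wi[i] += eps
            wj = w.copy(); wj[j] += eps
            wij = w.copy(); wij[i] += eps; wij[j] += eps
            H[i, j] = H[j, i] = (f(wij) - f(wi) - f(wj) + f0) / eps**2
    return H

def log_evidence(h):
    """Laplace approximation around the MAP weights w*:
    log Z ~= -M(w*) + (N/2)log(beta/2pi) + (W/2)log(alpha) - 0.5*log|A|,
    where A is the Hessian of M at w* (assumed positive definite there)."""
    W = 3 * h + 1
    best = min((minimize(neg_log_post, rng.normal(0.0, 0.5, W), args=(h,))
                for _ in range(3)), key=lambda r: r.fun)  # a few restarts
    A = hessian(lambda w: neg_log_post(w, h), best.x)
    _, logdetA = np.linalg.slogdet(A)
    return (-best.fun + 0.5 * N * np.log(beta / (2 * np.pi))
            + 0.5 * W * np.log(alpha) - 0.5 * logdetA)

# Bayesian model comparison: prefer the hidden-unit count with the highest
# evidence rather than the lowest training error.
for h in (1, 2, 3, 5, 8):
    print(f"h = {h}: log evidence ~ {log_evidence(h):.1f}")
```

In this generic recipe the network size with the largest evidence is preferred, which is the sense in which Bayesian model comparison selects an optimal network; the cell pruning mentioned in the abstract goes beyond this by removing individual hidden units, as developed in the paper.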
Authors
Udo v. Toussaint, Silvio Gori, Volker Dose