Article ID | Journal ID | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
408371 | 679025 | 2007 | 8-page PDF | Free download |

Neural networks have become very useful tools for input–output knowledge discovery. However, some of the most powerful schemes require very complex machines and, consequently, a large amount of computation. This paper presents a general technique for reducing the computational burden of the operational phase of most neural networks that compute their output as a weighted sum of terms, a family that includes a wide variety of schemes, such as Multi-Net or Radial Basis Function (RBF) networks. The basic idea consists of evaluating the sum terms sequentially, using a series of thresholds associated with the confidence that a partial output will coincide with the overall network classification decision. Furthermore, we design procedures for conveniently sorting the network units, so that the most important ones are evaluated first. The potential of this strategy is illustrated with experiments on a benchmark of binary classification problems, using RealAdaboost and RBF networks, which show that substantial computational savings can be achieved without significant degradation in recognition accuracy.
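The early-stopping rule described in the abstract can be made concrete with a short sketch. The following Python code is an illustrative assumption, not the authors' implementation: the names `units`, `weights`, and `thresholds` are hypothetical, the units are assumed to be pre-sorted by importance, and unit outputs are assumed bounded in [-1, 1] (as with RealAdaboost weak learners).

```python
import numpy as np

def sequential_classify(x, units, weights, thresholds):
    """Hypothetical sketch of sequential evaluation with early stopping.

    units      : callables, pre-sorted so the most important come first
    weights    : weight of each unit in the overall output sum
    thresholds : thresholds[k] is the confidence level the partial sum
                 must exceed after evaluating unit k to stop early
    """
    partial = 0.0
    for unit, w, theta in zip(units, weights, thresholds):
        partial += w * unit(x)      # add one more term of the weighted sum
        if abs(partial) > theta:    # partial output is already decisive
            break                   # remaining units need not be evaluated
    return np.sign(partial)


def safe_thresholds(weights):
    """Lossless threshold choice (an assumption, not the paper's rule):
    once |partial sum| exceeds the total absolute weight of the units
    not yet evaluated (outputs in [-1, 1]), the sign of the full sum
    can no longer change, so stopping is exact."""
    w = np.abs(np.asarray(weights, dtype=float))
    remaining = np.cumsum(w[::-1])[::-1]        # tail sums of |weights|
    return remaining[1:].tolist() + [0.0]       # threshold after each unit
```

With `safe_thresholds`, the decision is guaranteed to match full evaluation; the savings reported in the paper presumably come from looser, confidence-tuned thresholds that stop earlier at the cost of a small, controlled accuracy loss.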
Journal: Neurocomputing - Volume 70, Issues 16–18, October 2007, Pages 2775–2782