Article code: 535840 · Journal code: 870392 · Publication year: 2012 · Full text: 5-page English PDF
English title (ISI article)
Efficient and effective algorithms for training single-hidden-layer neural networks
Related topics
Engineering and Basic Sciences › Computer Engineering › Computer Vision and Pattern Recognition
English abstract

Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability as well as the existence of some efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks typically need to achieve matched classification accuracy. The use of a large number of hidden units translates into significantly increased test time, which in practice matters more than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall squared error is reduced the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the algorithms proposed in this paper obtain significantly better classification accuracy than ELM when the same number of hidden units is used. To obtain the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This large advantage is gained at the expense of at most 5 times the training cost incurred by ELM training.


► Exploits the structure of SHLNNs by plugging the upper-layer solution into the criterion.
► Exploits the gradient information over all training epochs.
► Updates the weights in the direction along which the overall squared error is reduced the most.
► Significantly outperforms ELM with the same number of hidden units, or
► performs as well as ELM with only 1/16 of the model size and test time.
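The ELM baseline that the paper improves on can be sketched briefly: the lower-layer weights are drawn at random and only the upper-layer weights are fit, in closed form by least squares. The sketch below is a minimal illustration under those assumptions; the function names, the sigmoid activation choice, and the toy XOR data are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden):
    # Lower-layer weights W are random and fixed (never trained),
    # as in ELM; only the upper-layer weights U are learned.
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = 1.0 / (1.0 + np.exp(-X @ W))   # sigmoid hidden activations
    U = np.linalg.pinv(H) @ T          # least-squares upper-layer solution
    return W, U

def elm_predict(X, W, U):
    H = 1.0 / (1.0 + np.exp(-X @ W))
    return H @ U

# Toy XOR-like task, purely for illustration.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W, U = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, U)
```

Because the hidden layer is random, ELM typically needs many hidden units to fit well; the paper's algorithms instead also adapt the lower layer, which is why they match ELM's accuracy with a much smaller model.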

Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition Letters - Volume 33, Issue 5, 1 April 2012, Pages 554–558
Authors