Article code: 404014 | Journal code: 677381 | Publication year: 2014 | English article: 7 pages | Full-text version: PDF, free download
English title of the ISI article
Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks
Related subjects
Engineering and Basic Sciences | Computer Engineering | Artificial Intelligence
English abstract

The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces the weights to become smaller during training, so that they can eventually be removed after training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages. First, it removes the oscillation of the gradient. Second, it gives better pruning: the final weights to be removed are smaller than those produced by the usual L1/2 regularization. Third, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided.
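The abstract does not reproduce the formulas, so as a minimal sketch, one standard C^2 smoothing of the L1/2 penalty can be written as below, where \tilde{E}(\mathbf{w}) is the ordinary training error, \lambda > 0 the regularization coefficient, and a > 0 a small smoothing width. The piecewise polynomial f is a common choice that agrees with |x| in value and first two derivatives at |x| = a; it is shown as an assumption, not as the authors' exact construction.

\[
E(\mathbf{w}) \;=\; \tilde{E}(\mathbf{w}) \;+\; \lambda \sum_{i=1}^{n} \sqrt{f(w_i)},
\qquad
f(x) \;=\;
\begin{cases}
|x|, & |x| \ge a,\\[4pt]
-\dfrac{x^{4}}{8a^{3}} + \dfrac{3x^{2}}{4a} + \dfrac{3a}{8}, & |x| < a.
\end{cases}
\]

With this choice f(x) >= 3a/8 > 0 everywhere, so \sqrt{f} is continuously differentiable along the whole real line, which is exactly what eliminates the gradient oscillation at the origin that the abstract describes.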

Publisher
Database: Elsevier - ScienceDirect
Journal: Neural Networks - Volume 50, February 2014, Pages 72–78