Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
6865097 | 1439554 | 2018 | 36-page PDF | Free Download |
English Title of the ISI Article
SRNN: Self-regularized neural network
Related Topics
Engineering and Basic Sciences
Computer Engineering
Artificial Intelligence
English Abstract
In this work, we aim to boost the discriminative capability of a deep neural network by alleviating the over-fitting problem. Previous works often address the problem of learning a neural network by optimizing one or more objective functions with existing regularization methods (such as dropout, weight decay, stochastic pooling, data augmentation, etc.). We argue that these approaches may struggle to further improve the classification performance of a neural network because they do not fully exploit its own learned knowledge. In this paper, we introduce a self-regularized strategy for learning a neural network, named the Self-Regularized Neural Network (SRNN). The intuition behind the SRNN is that the sample-wise soft targets of a neural network may have the potential to drag the network out of its local optimum. More specifically, an initial neural network is first pre-trained by optimizing one or more objective functions with ground-truth labels. We then gradually mine sample-wise soft targets, which reveal the correlation/similarity among classes predicted by the network itself. The parameters of the neural network are further updated to fit these sample-wise soft targets. This self-regularization learning procedure minimizes an objective function that integrates the sample-wise soft targets of the neural network and the ground-truth labels of the training samples. Three characteristics of the SRNN can be summarized as follows: (1) gradually mining the learned knowledge from a single neural network, and then correcting and enhancing this knowledge, resulting in the sample-wise soft targets; (2) regularly optimizing the parameters of this neural network with its sample-wise soft targets; (3) boosting the discriminative capability of a neural network with the self-regularization strategy. Extensive experiments on four public datasets, i.e., CIFAR-10, CIFAR-100, Caltech101 and MIT, demonstrate the effectiveness of the proposed SRNN for image classification.
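The abstract describes a training loop that mixes a hard-label loss with a loss against the network's own mined soft targets. The sketch below, assuming PyTorch, illustrates one such self-regularized update step; the model (`SmallNet`), the temperature, the loss weighting `alpha`, and the way soft targets are simulated are all illustrative assumptions and do not reproduce the paper's exact soft-target mining procedure.

```python
# Minimal sketch of a self-regularization training step (assumes PyTorch).
# SmallNet, alpha, and temperature are hypothetical; the paper's exact
# soft-target mining and scheduling are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Hypothetical small classifier standing in for the pre-trained network."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def self_regularized_step(model, optimizer, images, labels,
                          soft_targets, alpha=0.5, temperature=2.0):
    """One update fitting both the ground-truth labels and the network's own
    previously mined sample-wise soft targets (illustrative loss weighting)."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    # Hard-label term: standard cross-entropy with the ground truth.
    hard_loss = F.cross_entropy(logits, labels)
    # Soft-target term: KL divergence to the mined soft targets.
    log_probs = F.log_softmax(logits / temperature, dim=1)
    soft_loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    loss = (1 - alpha) * hard_loss + alpha * (temperature ** 2) * soft_loss
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallNet(num_classes=10)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    images = torch.randn(8, 3, 32, 32)      # dummy CIFAR-sized batch
    labels = torch.randint(0, 10, (8,))
    # Soft targets would normally be mined from the pre-trained network's
    # own predictions; here they are simulated with a softened forward pass.
    with torch.no_grad():
        soft_targets = F.softmax(model(images) / 2.0, dim=1)
    print(self_regularized_step(model, optimizer, images, labels, soft_targets))
```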
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 273, 17 January 2018, Pages 260-270
Authors
Chunyan Xu, Jian Yang, Junbin Gao, Hanjiang Lai, Shuicheng Yan