Article ID: 409494
Journal: Neurocomputing
Published Year: 2015
Pages: 21 Pages
File Type: PDF
Abstract

The backpropagation (BP) algorithm is the most commonly used training strategy for a feed-forward artificial neural network (FFANN). However, a BP-trained FFANN often suffers from a low convergence rate, high energy consumption, and poor generalization capability. In this paper, motivated by the sparsity of human neurons' responses, we introduce a new sparse-response BP (SRBP) algorithm that improves an FFANN by enforcing sparsity on its hidden units through a supplemental L1 term imposed on their activations. The FFANN model learned by our algorithm closely resembles the real human nervous system, and its mechanism reflects two of that system's key properties, i.e., sparse representation and architectural depth. Experiments on several datasets demonstrate that SRBP achieves good performance in terms of convergence rate, energy saving, and generalization capability.
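
For illustration only, the sketch below shows one way a supplemental L1 term on hidden-unit responses can be added to an ordinary BP training objective. It is not the authors' code: it assumes PyTorch, and the layer sizes, sigmoid activation, and the penalty weight sparsity_lambda are hypothetical choices rather than details taken from the paper.

import torch
import torch.nn as nn

class SparseResponseMLP(nn.Module):
    def __init__(self, n_in=784, n_hidden=256, n_out=10):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.sigmoid(self.hidden(x))   # hidden responses to be sparsified
        return self.out(h), h

model = SparseResponseMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
sparsity_lambda = 1e-4                      # hypothetical strength of the L1 response penalty

def train_step(x, y):
    optimizer.zero_grad()
    logits, h = model(x)
    # Standard BP loss plus a supplemental L1 term on the hidden activations:
    loss = criterion(logits, y) + sparsity_lambda * h.abs().mean()
    loss.backward()                         # gradients include the sparsity term
    optimizer.step()
    return loss.item()

# Example usage with random MNIST-shaped data:
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))

Because the penalty enters the loss directly, ordinary backpropagation handles it without any change to the optimizer; larger values of sparsity_lambda drive more hidden responses toward zero.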

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors