Article ID: 10148891
Journal: Expert Systems with Applications
Published Year: 2019
Pages: 36
File Type: PDF
Abstract
The feed-forward neural network (FNN) has drawn great interest in many applications due to its universal approximation capability. In this paper, a novel algorithm for training FNNs is proposed using the concept of sparse representation. The major advantage of the proposed algorithm is that it trains the initial network and optimizes the network structure simultaneously. The algorithm consists of two core stages: structure optimization and weight update. In the structure optimization stage, a sparse representation technique is employed to select the important hidden neurons that minimize the residual output error. In the weight update stage, a dictionary-learning-based method is used to update the network weights by maximizing the output diversity of the hidden neurons; this weight update is designed to improve the effectiveness of the structure optimization. On several benchmark classification and regression problems, we present experimental results comparing the proposed algorithm with state-of-the-art methods. The results show that the proposed algorithm achieves competitive performance in terms of final network size and generalization ability.
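The paper's exact formulation is not reproduced in this abstract, but the two stages it describes can be sketched under explicit assumptions. A natural reading of the structure-optimization stage is an orthogonal-matching-pursuit-style greedy selection, treating each hidden neuron's output as a dictionary atom and keeping the atoms that best explain the remaining output residual; the weight-update stage can be caricatured as a gradient step that decorrelates hidden-neuron outputs. The function names, the OMP formulation, and the Gram-matrix diversity penalty below are all illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def select_hidden_neurons(H, T, k):
    """OMP-style greedy selection of hidden neurons (illustrative sketch).

    H : (n_samples, n_hidden) hidden-layer activation matrix
    T : (n_samples, n_outputs) target outputs
    k : number of neurons to keep

    Returns the indices of the selected neurons and the least-squares
    output weights fitted on that subset.
    """
    residual = T.copy()
    selected = []
    for _ in range(k):
        # Score each neuron by the correlation of its output with the residual.
        scores = np.abs(H.T @ residual).sum(axis=1)
        scores[selected] = -np.inf          # exclude already-chosen neurons
        selected.append(int(np.argmax(scores)))
        # Refit output weights on the selected subset and update the residual.
        W, *_ = np.linalg.lstsq(H[:, selected], T, rcond=None)
        residual = T - H[:, selected] @ W
    return selected, W

def diversity_step(W_in, X, lr=1e-2):
    """One gradient step that pushes hidden outputs apart (illustrative).

    Penalizes the off-diagonal entries of the Gram matrix of the hidden
    activations H = tanh(X @ W_in); a crude stand-in for the paper's
    dictionary-learning-based weight update.
    """
    H = np.tanh(X @ W_in)
    G = H.T @ H
    off = G - np.diag(np.diag(G))           # pairwise neuron-output overlaps
    grad_H = 2.0 * H @ off                  # d/dH of 0.5 * ||off||_F^2
    grad_W = X.T @ (grad_H * (1.0 - H**2))  # chain rule through tanh
    return W_in - lr * grad_W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))       # 200 samples, 8 inputs
    W_in = rng.standard_normal((8, 50))     # 50 candidate hidden neurons
    T = rng.standard_normal((200, 3))       # regression targets
    W_in = diversity_step(W_in, X)          # weight-update stage (one step)
    H = np.tanh(X @ W_in)
    idx, W_out = select_hidden_neurons(H, T, k=10)  # structure optimization
    print("kept neurons:", sorted(idx))
```

In this reading, the diversity step makes the candidate atoms less redundant, which is exactly what helps a greedy residual-based selection: near-duplicate neurons would otherwise compete for the same residual energy.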
Related Topics
Physical Sciences and Engineering › Computer Science › Artificial Intelligence