Article ID | Journal | Published Year | Pages
---|---|---|---
6901375 | Procedia Computer Science | 2017 | 7
Abstract
There are many effective architectures for artificial neural networks (ANNs), but training them is hard work; the cost of training an ANN increases exponentially as the network gets deeper or wider. We therefore propose a novel architecture, the Hybrid Learning Network (HLN), to achieve fast learning with good stability. The HLN can learn from labeled and unlabeled data at the same time in a hybrid learning manner. It uses a Self-Organizing Map, unified by a specially designed nonlinear function, as a sparsity mask for a hidden layer to improve training speed. We evaluate our architecture's regression capability against a traditional architecture on a synthetic dataset, and the results are promising.
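The core mechanism the abstract describes, a Self-Organizing Map acting as a sparsity mask over a hidden layer, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the dense `tanh` layer, and the Gaussian kernel standing in for the paper's unspecified "specially designed nonlinear function" are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
n_features, n_hidden = 8, 16

# SOM codebook: one prototype vector per hidden unit.
codebook = rng.normal(size=(n_hidden, n_features))

# Ordinary dense weights for the hidden layer.
W = rng.normal(size=(n_features, n_hidden))

def som_sparsity_mask(x, codebook, temperature=1.0):
    """Soft sparsity mask derived from SOM prototype distances.

    Units whose prototype lies close to the input stay active;
    distant units are suppressed. A Gaussian kernel is used here
    as a stand-in for the paper's nonlinear function.
    """
    d2 = np.sum((codebook - x) ** 2, axis=1)  # squared distance to each prototype
    return np.exp(-d2 / temperature)          # values in (0, 1], near zero for far units

def hidden_forward(x, W, codebook):
    """Hidden layer with the SOM-derived mask applied to its activations."""
    h = np.tanh(x @ W)                        # standard dense activation
    return h * som_sparsity_mask(x, codebook) # sparsify: only units "near" x pass through

x = rng.normal(size=n_features)
h = hidden_forward(x, W, codebook)
print(h.shape)  # (16,)
```

Because the mask drives most activations toward zero, gradient updates concentrate on the few units whose prototypes match the current input, which is one plausible reading of how such a mask could speed up training.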
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Science (General)
Authors
Ying Liu, Chao Xiang