Article ID: 407383
Journal: Neurocomputing
Published Year: 2016
Pages: 10
File Type: PDF
Abstract

Single-hidden-layer feedforward neural networks with randomly fixed hidden neurons (RHN-SLFNs) have been shown, both theoretically and experimentally, to be fast and accurate. Moreover, it is well known that deep architectures can discover higher-level representations and can therefore potentially capture relevant higher-level abstractions. However, most current deep learning methods require a long time to solve a non-convex optimization problem. In this paper, we propose a stacked deep neural network, St-URHN-SLFNs, built from unsupervised RHN-SLFNs according to the stacked-generalization philosophy, to handle unsupervised problems. An empirical study on a wide range of data sets demonstrates that the proposed algorithm outperforms state-of-the-art unsupervised algorithms in terms of accuracy. Regarding computational efficiency, the proposed algorithm runs much faster than other deep learning methods, namely the deep autoencoder (DA) and the stacked autoencoder (SAE), and only slightly slower than the remaining methods.
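To make the core idea concrete, the sketch below illustrates the general RHN-SLFN building block (random, untrained hidden weights with an analytically solved linear output layer, trained to reconstruct its own input) and a greedy layer-by-layer stacking of such blocks. This is a minimal illustration of the generic technique only, not the authors' exact St-URHN-SLFNs algorithm; all function names, the ridge parameter `lam`, and the layer sizes are assumptions for the example.

```python
import numpy as np

def rhn_slfn_autoencoder(X, n_hidden, lam=1e-3, rng=None):
    """One unsupervised RHN-SLFN layer (sketch, not the paper's exact method).

    Hidden weights are drawn randomly once and never trained; only the
    linear output layer is solved, here by ridge regression so that the
    network reconstructs its own input (autoencoder-style).
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    # Closed-form output weights: argmin ||H beta - X||^2 + lam ||beta||^2.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ X)
    return H, beta

def stack_layers(X, layer_sizes, seed=0):
    """Greedy stacking: each layer's hidden representation feeds the next."""
    rng = np.random.default_rng(seed)
    rep = X
    for size in layer_sizes:
        rep, _ = rhn_slfn_autoencoder(rep, size, rng=rng)
    return rep

# Usage: map 64-dimensional inputs through a hypothetical 3-layer stack.
X = np.random.default_rng(0).standard_normal((200, 64))
features = stack_layers(X, [128, 64, 32])
print(features.shape)  # (200, 32)
```

Because each layer's output weights have a closed-form ridge-regression solution, no non-convex optimization is needed, which is the source of the speed advantage over gradient-trained deep autoencoders that the abstract refers to.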

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors