| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 4977467 | Signal Processing | 2017 | 7 | |
Abstract
Auto-Encoders, as a representative deep learning method, have been demonstrated to achieve superior performance in many applications. Hence, they are drawing increasing attention, and several variants of Auto-Encoders have been reported, including Contractive Auto-Encoders, Denoising Auto-Encoders, Sparse Auto-Encoders, and Nonnegativity-Constrained Auto-Encoders. Recently, Discriminative Auto-Encoders were reported to improve performance by considering within-class and between-class information. In this paper, we propose the Large Margin Auto-Encoder (LMAE) to further boost discriminability by enforcing samples of different classes to be distributed with a large margin in the hidden feature space. In particular, we stack single-layer LMAEs to construct a deep neural network that learns suitable features, and we then feed these features into a softmax classifier for classification. Extensive classification experiments are conducted on the MNIST and CIFAR-10 datasets. The experimental results demonstrate that the proposed LMAE outperforms the traditional Auto-Encoder algorithm.
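The abstract describes an objective that combines the usual autoencoder reconstruction loss with a term pushing hidden codes of different classes apart. As a minimal sketch (not the paper's exact formulation), one common way to realize such a constraint is a hinge-style penalty on pairwise hidden-space distances; the network sizes, the `margin` value, and the weighting `lam` below are illustrative assumptions:

```python
import numpy as np

# Illustrative single-layer LMAE-style objective (assumed form, not the paper's
# exact loss): reconstruction error plus a hinge penalty that charges pairs of
# different-class samples whose hidden codes are closer than `margin`.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lmae_loss(X, y, W, b, W2, b2, margin=1.0, lam=0.1):
    H = sigmoid(X @ W + b)        # hidden feature codes
    X_hat = sigmoid(H @ W2 + b2)  # reconstruction of the input
    recon = np.mean((X - X_hat) ** 2)
    # Large-margin penalty: for each pair drawn from different classes,
    # penalize hidden-space distances smaller than `margin`.
    penalty, n_pairs = 0.0, 0
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if y[i] != y[j]:
                d = np.linalg.norm(H[i] - H[j])
                penalty += max(0.0, margin - d) ** 2
                n_pairs += 1
    if n_pairs:
        penalty /= n_pairs
    return recon + lam * penalty

# Toy two-class data in 4-D with a 3-unit hidden layer (sizes are arbitrary).
X = rng.normal(size=(10, 4))
y = np.array([0, 1] * 5)
W = rng.normal(scale=0.1, size=(4, 3)); b = np.zeros(3)
W2 = rng.normal(scale=0.1, size=(3, 4)); b2 = np.zeros(4)
loss = lmae_loss(X, y, W, b, W2, b2)
```

In the stacked setting the paper describes, each trained layer's hidden codes `H` would serve as the input to the next single-layer LMAE, and the final features would be passed to a softmax classifier.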
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Liu Weifeng, Ma Tengzhou, Xie Qiangsheng, Tao Dapeng, Cheng Jun