| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 4970089 | Pattern Recognition Letters | 2017 | 10 Pages | |
Abstract
This paper proposes speeding up convolutional neural networks using Winner Takes All (WTA) hashing. More specifically, the WTA hash is used to identify relevant units, and only these are computed, effectively ignoring the rest. We show that the proposed method reduces the computational cost of forward and backward propagation for large fully connected layers. This allows us to train classification layers with a large number of units without the associated time penalty. We present several experiments on a dataset with 21K classes to gauge the effectiveness of the proposal. We show concretely that only a small amount of computation is required to train a classification layer. We then measure and showcase the ability of WTA hashing to identify the required computation. Furthermore, we compare this approach to the baseline and demonstrate a 6-fold speed-up during training without compromising performance.
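As a rough illustration of the idea summarized above (not the authors' implementation), the sketch below uses NumPy to build WTA hash tables over the weight rows of a large fully connected layer and then computes activations only for units whose hash codes collide with the input's codes. All sizes, names, and the collision-voting threshold `min_collisions` are illustrative assumptions.

```python
import numpy as np

def wta_hash(x, perms, k):
    """Winner-Takes-All hash of vector x: for each random permutation,
    the code is the argmax position within the first k permuted entries."""
    window = x[perms[:, :k]]      # shape (n_hashes, k): permuted prefixes of x
    return window.argmax(axis=1)  # shape (n_hashes,): one small code per hash

rng = np.random.default_rng(0)
dim, n_units, n_hashes, k = 256, 4000, 32, 8   # illustrative sizes only

W = rng.standard_normal((n_units, dim))        # weights of a large FC layer
perms = np.stack([rng.permutation(dim) for _ in range(n_hashes)])

# Index every output unit: one table per hash function, code -> unit ids.
unit_codes = np.stack([wta_hash(w, perms, k) for w in W])  # (n_units, n_hashes)
tables = [{} for _ in range(n_hashes)]
for u in range(n_units):
    for h in range(n_hashes):
        tables[h].setdefault(int(unit_codes[u, h]), []).append(u)

def sparse_forward(x, min_collisions=4):
    """Compute the layer output only for units whose codes collide with x's
    codes in at least `min_collisions` tables; all other units stay at zero."""
    codes = wta_hash(x, perms, k)
    votes = np.zeros(n_units, dtype=int)
    for h in range(n_hashes):
        votes[tables[h].get(int(codes[h]), [])] += 1
    active = np.flatnonzero(votes >= min_collisions)
    out = np.zeros(n_units)
    out[active] = W[active] @ x    # the only dense computation performed
    return out, active

out, active = sparse_forward(rng.standard_normal(dim))
print(f"computed {active.size} of {n_units} units")
```

The hash lookups cost far less than a dense multiply over all output units, which is where the reported training speed-up would come from when the layer has tens of thousands of units.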
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Amir H. Bakhtiary, Agata Lapedriza, David Masip