Article ID | Journal | Published Year | Pages
---|---|---|---
6958919 | Signal Processing | 2016 | 6
Abstract
In this paper, we propose a multimodal feature learning mechanism based on deep networks (i.e., stacked contractive autoencoders) for video classification. Considering the three modalities in video, i.e., image, audio, and text, we first build one Stacked Contractive Autoencoder (SCAE) for each modality; their outputs are then joined together and fed into another Multimodal Stacked Contractive Autoencoder (MSCAE). The first stage preserves intra-modality semantic relations, and the second stage discovers inter-modality semantic correlations. Experiments on a real-world dataset demonstrate that the proposed approach outperforms state-of-the-art methods.
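The sketch below illustrates the two-stage idea described in the abstract: one stacked contractive autoencoder per modality, whose top-level codes are concatenated and fed to a multimodal stacked contractive autoencoder. This is not the authors' code; the use of PyTorch, the layer sizes, the contraction weight `lam`, and the names `ContractiveAE` and `stack_codes` are all illustrative assumptions. The contractive penalty shown is the standard Frobenius norm of the encoder Jacobian for a sigmoid hidden layer.

```python
# Minimal sketch of per-modality SCAEs followed by a multimodal SCAE (MSCAE).
# All dimensions and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn

class ContractiveAE(nn.Module):
    """One contractive autoencoder layer with a sigmoid hidden code."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return h, self.dec(h)

    def loss(self, x, lam=1e-3):
        h, x_hat = self.forward(x)
        recon = ((x_hat - x) ** 2).sum(dim=1).mean()
        # Contractive penalty: Frobenius norm of the Jacobian dh/dx.
        # For a sigmoid code, dh/dx = diag(h * (1 - h)) @ W, so
        # ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * ||W_j||^2.
        dh = (h * (1 - h)) ** 2                  # (batch, hid)
        w2 = (self.enc.weight ** 2).sum(dim=1)   # (hid,)
        contract = (dh @ w2).mean()
        return recon + lam * contract

def stack_codes(layers, x):
    """Forward pass through a stack of CAEs, returning the top-level code."""
    h = x
    for layer in layers:
        h, _ = layer(h)
    return h

# Stage 1: one SCAE per modality (preserves intra-modality relations).
image_scae = [ContractiveAE(4096, 1024), ContractiveAE(1024, 256)]
audio_scae = [ContractiveAE(512, 128)]
text_scae  = [ContractiveAE(2000, 256)]

# Stage 2: MSCAE on the concatenated per-modality codes
# (discovers inter-modality correlations).
mscae = [ContractiveAE(256 + 128 + 256, 128)]

x_img, x_aud, x_txt = torch.randn(8, 4096), torch.randn(8, 512), torch.randn(8, 2000)
joint = torch.cat([stack_codes(image_scae, x_img),
                   stack_codes(audio_scae, x_aud),
                   stack_codes(text_scae, x_txt)], dim=1)
video_feature = stack_codes(mscae, joint)  # shared multimodal representation

# Each layer would be pre-trained greedily with its contractive loss, e.g.:
layer_loss = image_scae[0].loss(x_img)
```

In a typical pipeline of this kind, each autoencoder layer is trained greedily bottom-up on its reconstruction-plus-contraction objective, and the final multimodal code is passed to a classifier for video classification.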
Related Topics
Physical Sciences and Engineering
Computer Science
Signal Processing
Authors
Yanan Liu, Xiaoqing Feng, Zhiguang Zhou