Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
535355 | 870341 | 2014 | 7-page PDF | Free download |
• Classic BoW (bag-of-words) just counts words and lacks spatio-temporal constraints.
• Proposed extension of BoW (t-BoW) considers aggregated temporal word co-occurrences.
• t-BoW is conceptually simpler than other existing BoW extensions.
• The BoW pipeline is altered minimally and no additional learning schemes are required.
• t-BoW is effective and outperforms plain BoW and other extensions.
The bag-of-words (BoW) representation has been used successfully for human action recognition from videos. However, one limitation of the standard BoW is that it ignores spatial and temporal relationships between the visual words. Although several approaches have been proposed to deal with this issue, we propose an extension which is arguably simpler yet quite effective. The proposed representation, t-BoW, captures only temporal relationships between pairs of words in an aggregated way, by counting co-occurrences at several temporal differences. Unlike other approaches, neither spatial nor hierarchical information is accounted for explicitly, and no significant change is required in the quantization or classification procedures. Performance improvements over the traditional BoW and other BoW extensions are experimentally observed on the KTH, ADL, Keck, and HMDB51 action/gesture datasets.
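The abstract's core idea, counting how often word w is followed by word w' exactly Δ frames later, for each offset Δ in a small set, can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the video has already been quantized into a per-frame sequence of visual-word indices, and the function name and flattening scheme are hypothetical.

```python
import numpy as np

def t_bow_histogram(words, vocab_size, deltas):
    """Aggregate temporal word co-occurrences (sketch of the t-BoW idea).

    words: 1-D sequence of visual-word indices, one per frame.
    vocab_size: number of visual words in the codebook.
    deltas: temporal differences (in frames) at which pairs are counted.
    Returns a flat count vector with one (w, w') table per delta.
    """
    words = np.asarray(words)
    hist = np.zeros((len(deltas), vocab_size, vocab_size))
    for i, d in enumerate(deltas):
        # Count each ordered pair (word at frame t, word at frame t + d).
        for t in range(len(words) - d):
            hist[i, words[t], words[t + d]] += 1
    return hist.ravel()

# Toy example: a 2-word vocabulary, co-occurrences at an offset of 1 frame.
vec = t_bow_histogram([0, 1, 0, 1], vocab_size=2, deltas=[1])
```

The resulting vector can replace (or be concatenated with) the plain BoW histogram and fed to the same classifier, consistent with the abstract's claim that no significant change to the quantization or classification procedures is required.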
Journal: Pattern Recognition Letters - Volume 49, 1 November 2014, Pages 224–230