Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
527752 | Computer Vision and Image Understanding | 2013 | 16 Pages | |
Recognizing human actions from a stream of unsegmented sensory observations is important for a number of applications such as surveillance and human-computer interaction. A wide range of graphical models have been proposed for these tasks, typically as extensions of the generative hidden Markov models (HMMs) or their discriminative counterpart, conditional random fields (CRFs). These extensions usually address one of three key limitations of the basic HMM/CRF formalism: unrealistic models of sub-event duration, no direct encoding of interactions among multiple agents, and no modeling of the inherent hierarchical organization of activities. We present a family of graphical models that generalizes such extensions and simultaneously models event duration, multi-agent interactions and hierarchical structure. We also present general algorithms for efficient learning and inference in such models, based on local variational approximations. We demonstrate the effectiveness of our framework by developing graphical models for American Sign Language (ASL) recognition and for gesture and action recognition in videos. Our methods achieve results comparable to the state of the art on the datasets we consider, while requiring far fewer training examples than low-level feature-based methods.
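To make the duration-modeling limitation concrete: a standard HMM implicitly assigns sub-event durations a geometric distribution via self-transitions, whereas an explicit-duration (semi-Markov) extension scores each segment length directly. The sketch below is a minimal segmental Viterbi decoder for a toy two-state explicit-duration HMM; it is not the paper's algorithm, and all parameters (`A`, `D`, `B`, `PI`) are invented for illustration.

```python
import math

# Toy explicit-duration HMM (hidden semi-Markov model) with two
# sub-events and discrete observations {0, 1}.  All parameters here
# are invented for illustration.
STATES = (0, 1)
A = [[0.0, 1.0],          # transitions between *segments*; self-loops
     [1.0, 0.0]]          # are replaced by the explicit duration model
D = [{1: 0.2, 2: 0.6, 3: 0.2},   # state 0 prefers short segments
     {2: 0.1, 3: 0.3, 4: 0.6}]   # state 1 prefers longer segments
B = [[0.9, 0.1],          # state 0 mostly emits symbol 0
     [0.2, 0.8]]          # state 1 mostly emits symbol 1
PI = [0.5, 0.5]

def hsmm_viterbi(obs):
    """Segmental Viterbi: best[t][j] is the log-probability of the best
    segmentation of obs[:t] whose last segment ends at time t in state j."""
    T, NEG = len(obs), float("-inf")
    best = [[NEG, NEG] for _ in range(T + 1)]
    back = [[None, None] for _ in range(T + 1)]
    for t in range(1, T + 1):
        for j in STATES:
            for d, p_dur in D[j].items():      # try every segment length
                if d > t:
                    continue
                emit = sum(math.log(B[j][o]) for o in obs[t - d:t])
                if t - d == 0:                 # first segment of the sequence
                    score, prev = math.log(PI[j] * p_dur) + emit, None
                else:                          # extend a previous segmentation
                    score, prev = max(
                        (best[t - d][i] + math.log(A[i][j] * p_dur) + emit, i)
                        for i in STATES if A[i][j] > 0)
                if score > best[t][j]:
                    best[t][j], back[t][j] = score, (prev, d)
    # Backtrack over whole segments, expanding each to per-frame labels.
    t, j, labels = T, max(STATES, key=lambda s: best[T][s]), []
    while t > 0:
        prev, d = back[t][j]
        labels[:0] = [j] * d
        t -= d
        if prev is None:
            break
        j = prev
    return labels

print(hsmm_viterbi([0, 0, 1, 1, 1, 0, 0]))   # → [0, 0, 1, 1, 1, 0, 0]
```

Because each candidate segment is scored with its own duration probability `D[j][d]`, the decoder recovers three segments whose lengths match the preferred durations, rather than the geometric decay a self-looping HMM state would impose.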
► Family of graphical models for simultaneously modeling hierarchical structure, event durations and multi-agent interactions.
► Fast inference algorithms based on local variational approximations.
► Parameter learning via embedded Viterbi learning on unannotated, unsegmented training sequences.
► Expectation-Maximization based algorithm for parameter learning in directed graphical models.
► Experimental results for continuous sign language, gesture and human action recognition.