Article Code | Journal Code | Publication Year | English Article | Full Text Version |
---|---|---|---|---|
527035 | 869274 | 2008 | 14-page PDF | Free Download |

We present an extension of variable length Markov models (VLMMs) that allows modelling of continuous input data, and show that the generative properties of these VLMMs are a powerful tool for dealing with real-world tracking issues. We explore methods for addressing the temporal correspondence problem in the context of a practical hand tracker, which is essential for supporting expectation in task-based control using these behavioural models. The hand tracker forms part of a larger multi-component distributed system, providing 3-D hand position data to a gesture recogniser client. We show how the performance of such a hand tracker can be improved by using feedback from the gesture recogniser client. In particular, feedback based on the generative extrapolation of the recogniser's internal models is shown to help the tracker deal with mid-term occlusion. We also show that VLMMs can be used to inform the prior in an expectation maximisation (EM) process used for joint spatial and temporal learning of image features.
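To make the generative-extrapolation idea concrete, the following is a minimal Python sketch of a discrete VLMM that is rolled forward to predict hand positions while the tracker has no observations (e.g. during a mid-term occlusion). The `VLMM` class, the quantisation of motion into step vectors, and all names are illustrative assumptions for this sketch only; the paper's models handle continuous input and run inside a distributed tracker/recogniser system.

```python
from collections import defaultdict, Counter
import random


class VLMM:
    """Variable-length Markov model over a discrete symbol alphabet.

    Stores next-symbol counts for every context (history suffix) up to
    max_depth; prediction backs off to the longest context actually seen.
    """

    def __init__(self, max_depth=4):
        self.max_depth = max_depth
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol counts

    def train(self, sequence):
        # Accumulate counts for all suffix contexts of length 0..max_depth.
        for i in range(len(sequence)):
            for d in range(self.max_depth + 1):
                if i - d < 0:
                    break
                context = tuple(sequence[i - d:i])
                self.counts[context][sequence[i]] += 1

    def next_distribution(self, history):
        # Back off from the longest observed suffix of the history.
        for d in range(min(self.max_depth, len(history)), -1, -1):
            context = tuple(history[len(history) - d:])
            if context in self.counts:
                c = self.counts[context]
                total = sum(c.values())
                return {s: n / total for s, n in c.items()}
        return {}

    def sample_next(self, history):
        dist = self.next_distribution(history)
        symbols, probs = zip(*dist.items())
        return random.choices(symbols, weights=probs)[0]


def extrapolate_during_occlusion(model, history, last_position, step_vectors, n_steps):
    """Generatively roll the VLMM forward to predict positions while occluded.

    `step_vectors` maps each discrete motion symbol to a 2-D displacement;
    this quantisation is an assumption of the sketch, not the paper's
    continuous-input extension.
    """
    history = list(history)
    x, y = last_position
    predictions = []
    for _ in range(n_steps):
        symbol = model.sample_next(history)
        history.append(symbol)
        dx, dy = step_vectors[symbol]
        x, y = x + dx, y + dy
        predictions.append((x, y))
    return predictions
```

In this toy setting, the recogniser-side model trained on quantised motion symbols (e.g. `'U'`, `'D'`, `'L'`, `'R'` with unit displacements) can feed its extrapolated positions back to the tracker as a search prior until the hand reappears; the real system feeds back predictions from the recogniser's behavioural models rather than this simple random walk over symbols.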
Journal: Image and Vision Computing - Volume 26, Issue 1, 1 January 2008, Pages 39–52