| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 527099 | Image and Vision Computing | 2011 | 13 | |
A Hierarchical Model Fusion (HMF) framework for object tracking in video sequences is presented. The Bayesian tracking equations are extended to account for multiple object models. With these equations as a basis, a particle filter algorithm is developed to efficiently cope with the multi-modal distributions that emerge in cluttered scenes. Each object model is updated hierarchically, so that the lower-dimensional object models, which are updated first, guide the search in the parameter space of the subsequent object models toward relevant regions, thereby reducing computational complexity. A method for object model adaptation is also developed. We apply the proposed framework by fusing salient points, blobs, and edges as features, and verify its effectiveness experimentally under challenging conditions.
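The hierarchical update described above can be pictured as a cascade of particle reweighting stages, where the low-dimensional cue is evaluated first and its survivors seed the search for the remaining parameters. The snippet below is a minimal sketch of that idea, not the paper's implementation: the cue likelihoods (`likelihood_blob`, `likelihood_edge`), the (x, y, scale) state layout, and the noise scales are assumptions made purely for illustration.

```python
# Minimal sketch of a hierarchical (cascaded) particle-filter update, assuming
# conditionally independent cue likelihoods and a Gaussian random-walk motion
# model. The cues and state layout below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def likelihood_blob(xy):
    """Hypothetical low-dimensional cue: scores 2-D positions, shape (N, 2)."""
    return np.exp(-0.5 * np.sum((xy - np.array([5.0, 5.0])) ** 2, axis=1))

def likelihood_edge(state):
    """Hypothetical higher-dimensional cue: scores (x, y, scale) states, shape (N, 3)."""
    return np.exp(-0.5 * (state[:, 2] - 1.0) ** 2)

def resample(particles, weights):
    """Systematic resampling: concentrate particles in high-weight regions."""
    n = len(weights)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

def hmf_step(particles):
    """One hierarchical update: position cue first, then the scale cue."""
    # 1) Propagate and weight only the low-dimensional sub-state (position).
    particles[:, :2] += rng.normal(scale=0.5, size=(len(particles), 2))
    w = likelihood_blob(particles[:, :2])
    w /= w.sum()
    particles = resample(particles, w)  # survivors guide the next stage

    # 2) Extend the surviving particles along the remaining dimension (scale)
    #    and weight them with the higher-dimensional cue.
    particles[:, 2] += rng.normal(scale=0.05, size=len(particles))
    w = likelihood_edge(particles)
    w /= w.sum()
    return resample(particles, w)

# Usage: track for a few frames with 500 particles of state (x, y, scale).
particles = np.column_stack([rng.normal(5, 2, (500, 2)), np.ones(500)])
for _ in range(10):
    particles = hmf_step(particles)
print(particles.mean(axis=0))  # rough state estimate after 10 frames
```

Because the cheap, low-dimensional cue prunes the particle set before the higher-dimensional parameters are sampled, the cascade avoids evaluating the expensive cue over the full joint parameter space, which is the source of the efficiency gain the framework targets.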
Highlights

- A Hierarchical Model Fusion framework for visual tracking is presented.
- A particle filter approximation algorithm is developed for the framework.
- The algorithm fuses multiple cues while maintaining efficiency.
- The experiments show a 5 to 10 times performance increase compared to SIR.