Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
4943591 | Expert Systems with Applications | 2017 | 34 Pages |
Abstract
Most visual tracking methods rely on single-stage state estimation, which limits precise localization of the target in dynamic environments involving occlusion, object deformation, rotation, scaling, and cluttered backgrounds. To address these issues, we introduce a novel multi-stage coarse-to-fine tracking framework that adapts quickly to environmental dynamics. The key idea of our work is a two-stage estimation of the object state combined with an adaptive fusion model. A coarse estimate of the object state is obtained using optical flow, and multiple fragments are generated around this approximation. Precise localization of the object is then achieved by evaluating these fragments with three complementary cues. The proposed tracker adapts quickly to dynamic environment changes through context-sensitive cue reliability, which enables its direct application in expert systems for video surveillance. In addition, the proposed framework handles object rotation and scaling through a random-walk state model and rotation-invariant features. The proposed tracker is evaluated on eight benchmark color video sequences and achieves competitive results: averaged over the outcomes, a mean center location error of 6.791 pixels and an F-measure of 0.78. The results demonstrate that the proposed tracker not only outperforms various state-of-the-art trackers but also copes effectively with a variety of dynamic environments.
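The two-stage pipeline described in the abstract can be sketched schematically: a coarse state estimate propagated by optical flow, followed by fragment sampling and reliability-weighted cue fusion. This is a minimal illustrative sketch, not the authors' implementation; the function names, the number of fragments, the spread of the random walk, and the reliability weights are all assumptions, and the optical-flow displacement is taken as given rather than computed from frames.

```python
import random

def coarse_estimate(prev_pos, flow):
    """Stage 1 (sketch): propagate the previous target position by an
    optical-flow displacement. The flow vector is assumed precomputed;
    the paper derives it from consecutive frames."""
    return (prev_pos[0] + flow[0], prev_pos[1] + flow[1])

def refine(coarse, cues, reliability, n_fragments=8, spread=2.0, seed=42):
    """Stage 2 (sketch): sample candidate fragments around the coarse
    state via a random walk, score each with complementary cues, and
    fuse the scores with context-sensitive reliability weights.
    All parameters here are illustrative placeholders."""
    rng = random.Random(seed)
    best, best_score = coarse, float("-inf")
    for _ in range(n_fragments):
        # random-walk perturbation around the coarse estimate
        cand = (coarse[0] + rng.uniform(-spread, spread),
                coarse[1] + rng.uniform(-spread, spread))
        # adaptive fusion: reliability-weighted sum of cue scores
        score = sum(w * cue(cand) for w, cue in zip(reliability, cues))
        if score > best_score:
            best, best_score = cand, score
    return best
```

In use, each cue would score a fragment's appearance (e.g. color, texture, edge similarity) against the target model, and the reliability weights would be updated from recent tracking context; here any three scoring callables can stand in for the cues.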
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Gurjit Singh Walia, Saim Raza, Anjana Gupta, Rajesh Asthana, Kuldeep Singh