Article ID: 411859
Journal: Neurocomputing
Published Year: 2015
Pages: 19
File Type: PDF
Abstract

Visual tracking usually requires an object appearance model that is robust to changing illumination, partial occlusion, large pose changes and other factors encountered in video. Most existing visual tracking algorithms tend to drift away from the target, and may even fail entirely, when the object appearance varies significantly or the scene is otherwise challenging. To address this issue, we propose a robust tracking algorithm based on discriminative projective non-negative matrix factorization and a robust inter-frame matching scheme. The target and background models are represented by the basis matrices of non-negative matrix factorization. To adapt the basis matrices to the variation of foreground and background during tracking, an incremental learning method is employed to update them. A robust inter-frame matching scheme based on bidirectional matching and Delaunay triangulation is adopted to improve the proposal distribution of the particle filter, thus enhancing tracking performance. Template matching is used to correct target drift when the result of the discriminative part is unreliable. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experiments on publicly available benchmark video sequences demonstrate the effectiveness and robustness of our approach.
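To illustrate the general idea of a discriminative NMF-based appearance model, the following is a minimal sketch, not the authors' actual implementation: it assumes vectorised, non-negative image patches and uses scikit-learn's NMF in place of the paper's projective NMF and incremental update. The class name `ForegroundBackgroundModel` and the scoring rule are hypothetical.

```python
# Hypothetical sketch: score tracking candidates with separate non-negative
# bases for target (foreground) and background, as a particle-filter likelihood.
import numpy as np
from sklearn.decomposition import NMF


class ForegroundBackgroundModel:
    """Keeps separate non-negative basis matrices for target and background."""

    def __init__(self, n_components=16):
        # Standard NMF stands in for the paper's projective NMF (assumption).
        self.fg_nmf = NMF(n_components=n_components, init="nndsvda", max_iter=400)
        self.bg_nmf = NMF(n_components=n_components, init="nndsvda", max_iter=400)

    def fit(self, fg_patches, bg_patches):
        # Each row is a vectorised, non-negative image patch.
        self.fg_nmf.fit(fg_patches)
        self.bg_nmf.fit(bg_patches)

    def _recon_error(self, nmf, patch):
        # Project the patch onto the learned basis and measure reconstruction error.
        h = nmf.transform(patch.reshape(1, -1))
        recon = h @ nmf.components_
        return np.linalg.norm(patch - recon.ravel())

    def likelihood(self, patch, sigma=10.0):
        # Discriminative score: a good target candidate is reconstructed well by
        # the foreground basis and poorly by the background basis.
        e_fg = self._recon_error(self.fg_nmf, patch)
        e_bg = self._recon_error(self.bg_nmf, patch)
        return np.exp(-(e_fg - e_bg) / sigma)
```

In a particle-filter setting of the kind described above, each sampled candidate region would be weighted by such a likelihood, and the basis matrices would be refreshed as new frames arrive; the paper's incremental update and the bidirectional matching with Delaunay triangulation that shapes the proposal distribution are not shown here.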

Related Topics
Physical Sciences and Engineering > Computer Science > Artificial Intelligence
Authors