Article ID: 532414
Journal: Pattern Recognition
Published Year: 2012
Pages: 11
File Type: PDF
Abstract

We propose an on-line algorithm to segment foreground from background in videos captured by a moving camera. In our algorithm, temporal model propagation and spatial model composition are combined to generate foreground and background models, and likelihood maps are computed from the models. An energy minimization technique is then applied to the likelihood maps for segmentation. In the temporal step, block-wise models are transferred from the previous frame using motion information, and pixel-wise foreground/background likelihoods and labels in the current frame are estimated from these models. In the spatial step, additional block-wise foreground/background models are constructed from the models and labels given by the temporal step, and the corresponding per-pixel likelihoods are also generated. A graph-cut algorithm performs segmentation based on the foreground/background likelihood maps, and the segmentation result is used to update the motion of each segment within a block; the temporal model propagation and spatial model composition steps are then re-evaluated with the updated motions, yielding an iterative procedure. We tested our framework on various challenging videos involving large camera and object motions, significant background changes, and clutter.
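To make the pipeline concrete, the following is a minimal sketch of the iterative loop described above, not the authors' implementation. The block size (BLOCK), the colour histograms standing in for the block-wise appearance models, the helper names (block_histogram, propagate_models, compose_models, segment_frame), and the per-pixel likelihood-ratio test used in place of the paper's graph-cut energy minimization are all assumptions made to keep the example self-contained; the per-block motion update is left as a stub.

import numpy as np

BLOCK = 16   # block size in pixels (assumed, not from the paper)
BINS = 8     # histogram bins per colour channel (assumed)

def block_histogram(pixels):
    """Normalised colour histogram of a set of pixels, used as a block model."""
    hist, _ = np.histogramdd(
        pixels.reshape(-1, 3).astype(float),
        bins=(BINS,) * 3, range=[(0, 256)] * 3)
    return hist / max(hist.sum(), 1.0)

def patch_likelihood(patch, hist):
    """Per-pixel likelihood of a patch under a block histogram model."""
    idx = (patch.astype(int) * BINS // 256).clip(0, BINS - 1)
    return hist[idx[..., 0], idx[..., 1], idx[..., 2]]

def propagate_models(prev_models, motions):
    """Temporal step: shift each previous block model by its block motion."""
    moved = {}
    for (by, bx), model in prev_models.items():
        dy, dx = motions.get((by, bx), (0, 0))
        moved[(by + dy, bx + dx)] = model
    return moved

def compose_models(frame, label):
    """Spatial step: rebuild block-wise models from the current pixel labels."""
    fg_models, bg_models = {}, {}
    h, w, _ = frame.shape
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            key = (by // BLOCK, bx // BLOCK)
            patch = frame[by:by + BLOCK, bx:bx + BLOCK]
            mask = label[by:by + BLOCK, bx:bx + BLOCK]
            if mask.any():
                fg_models[key] = block_histogram(patch[mask])
            if (~mask).any():
                bg_models[key] = block_histogram(patch[~mask])
    return fg_models, bg_models

def segment_frame(frame, prev_fg, prev_bg, motions, n_iters=3):
    """One frame of the iterative temporal/spatial modelling loop."""
    h, w, _ = frame.shape
    uniform = np.full((BINS,) * 3, 1.0 / BINS ** 3)
    label = np.zeros((h, w), dtype=bool)
    for _ in range(n_iters):
        # 1. temporal propagation of the previous frame's block models
        fg = propagate_models(prev_fg, motions)
        bg = propagate_models(prev_bg, motions)
        # 2. spatial composition: models rebuilt from the current labels
        sp_fg, sp_bg = compose_models(frame, label)
        fg.update(sp_fg)
        bg.update(sp_bg)
        # 3. foreground/background likelihood maps
        fg_like = np.zeros((h, w))
        bg_like = np.zeros((h, w))
        for by in range(0, h, BLOCK):
            for bx in range(0, w, BLOCK):
                key = (by // BLOCK, bx // BLOCK)
                patch = frame[by:by + BLOCK, bx:bx + BLOCK]
                sl = (slice(by, by + BLOCK), slice(bx, bx + BLOCK))
                fg_like[sl] = patch_likelihood(patch, fg.get(key, uniform))
                bg_like[sl] = patch_likelihood(patch, bg.get(key, uniform))
        # 4. segmentation: the paper minimises an energy with graph cuts;
        #    a plain likelihood-ratio test keeps this sketch dependency-free
        label = fg_like > bg_like
        # 5. the per-segment block motions would be re-estimated from `label`
        #    here before the next iteration; they are kept fixed in this sketch
    return label, fg, bg

# Toy usage: empty "previous" models and zero motion on a synthetic frame.
if __name__ == "__main__":
    frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    mask, fg, bg = segment_frame(frame, prev_fg={}, prev_bg={}, motions={})
    print(mask.shape, mask.mean())

Replacing the likelihood-ratio step with a proper max-flow/graph-cut solver would restore the spatial smoothness term that the paper's energy minimization provides.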

► We propose an on-line method to segment moving objects in a moving camera environment.
► The proposed algorithm is a block-based iterative appearance modeling technique.
► Our appearance modeling is more resistant to image registration errors.
► We analyzed the performance of our method qualitatively and quantitatively.

Related Topics
Physical Sciences and Engineering › Computer Science › Computer Vision and Pattern Recognition
Authors