Article ID: 6941031
Journal: Pattern Recognition Letters
Published Year: 2016
Pages: 10
File Type: PDF
Abstract
Bottom-up methods and the general Bayesian framework for saliency detection commonly suffer from two drawbacks. First, they are sensitive to background noise, so background regions similar to the object are also highlighted. Second, they consider only appearance features, so an object composed of several distinct parts is not highlighted uniformly. In this paper, we propose a novel and unified geodesic weighted Bayesian model that incorporates spatial relationships by reformulating Bayes' formula. First, we infer more precise initial salient regions via a fully connected CRF model. Second, to highlight the whole object uniformly, we learn a robust measure of region similarity that describes the probability of two regions belonging to the same object, so that regions belonging to the same object are assigned similar saliency values. Third, using the learned region similarity as edge weights, we construct an undirected weighted graph and compute the geodesic distance between regions. Regions with a short geodesic distance to the initial salient regions are given more importance, which suppresses background noise. By using the results of existing methods as the prior distribution, our model can be integrated with any of these methods and improve their performance. Experiments on benchmark datasets demonstrate that our model significantly improves the quality of saliency detection.
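The abstract's third step (geodesic distances over a region graph whose edge weights come from a learned similarity, used to suppress background regions far from the initial salient regions) can be illustrated with a short sketch. This is not the authors' implementation: the region segmentation, the learned similarity matrix, the CRF-derived seed regions, and the exponential re-weighting of the prior are all assumptions introduced here for illustration.

```python
# A minimal sketch of the geodesic weighting step described in the abstract.
# Assumptions (not from the paper): the image is already segmented into N regions,
# `similarity[i, j]` is a learned probability that regions i and j belong to the
# same object, `adjacency[i, j]` marks neighbouring regions, `prior` is the saliency
# map produced by any existing method, and `seeds` indexes the initial salient
# regions inferred by the fully connected CRF.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra


def geodesic_weighted_saliency(prior, similarity, adjacency, seeds, sigma=0.25):
    """Re-weight a prior saliency map by geodesic distance to initial salient regions."""
    # Edge weight: high similarity (same object) -> short edge, so regions of the
    # same object stay geodesically close to the seed regions.
    weights = np.where(adjacency, 1.0 - similarity, 0.0)
    graph = csr_matrix(weights)

    # Shortest geodesic distance from every region to its nearest seed region.
    dist_from_seeds = dijkstra(graph, directed=False, indices=seeds)
    d = dist_from_seeds.min(axis=0)
    d[~np.isfinite(d)] = d[np.isfinite(d)].max()  # disconnected regions get max distance

    # Regions close to the initial salient regions keep their prior saliency;
    # distant (background) regions are suppressed.
    geodesic_weight = np.exp(-d / sigma)
    saliency = prior * geodesic_weight
    return saliency / (saliency.max() + 1e-12)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    prior = rng.random(n)                              # e.g. output of any existing method
    sim = rng.random((n, n)); sim = (sim + sim.T) / 2  # stand-in for the learned similarity
    adj = ~np.eye(n, dtype=bool)                       # fully adjacent toy region graph
    print(geodesic_weighted_saliency(prior, sim, adj, seeds=[0, 1]))
```

The exponential decay with bandwidth `sigma` is one plausible way to convert geodesic distance into a multiplicative weight on the prior; the paper's actual reformulation of Bayes' formula may combine the terms differently.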
Related Topics
Physical Sciences and Engineering Computer Science Computer Vision and Pattern Recognition
Authors