Article code | Journal code | Publication year | English article | Full-text version
---|---|---|---|---
529116 | 869631 | 2012 | 9-page PDF | Free download

In this paper we present a redundancy-reduction-based approach for computational bottom-up visual saliency estimation. In contrast to conventional methods, our approach determines saliency by filtering out redundant content rather than measuring its significance. To analyze the redundancy of self-repeating spatial structures, we propose a non-local self-similarity based procedure. The resulting redundancy coefficient is used to compensate the Shannon entropy, which is computed from statistics of pixel intensities, to generate the bottom-up saliency map of the visual input. Experimental results on three publicly available databases demonstrate that the proposed model is highly consistent with subjective visual attention.
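The idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, histogram binning, Gaussian similarity kernel, `sigma` value, and the multiplicative combination `entropy * (1 - redundancy)` are all assumptions standing in for the paper's exact formulation.

```python
# Hedged sketch: per-patch Shannon entropy compensated by a non-local
# self-similarity redundancy coefficient (illustrative parameters only).
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy (bits) of pixel intensities in one patch."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def redundancy(patches, idx, sigma=0.1):
    """Non-local redundancy of patch idx: mean Gaussian similarity to
    every other patch; self-repeating structures score close to 1."""
    ref = patches[idx]
    d = np.array([np.mean((ref - q) ** 2)
                  for j, q in enumerate(patches) if j != idx])
    return float(np.mean(np.exp(-d / (2.0 * sigma ** 2))))

def saliency_map(img, size=8):
    """Block-wise bottom-up saliency: entropy discounted by redundancy."""
    h, w = img.shape
    gh, gw = h // size, w // size
    patches = [img[i*size:(i+1)*size, j*size:(j+1)*size]
               for i in range(gh) for j in range(gw)]
    sal = np.empty(gh * gw)
    for k in range(len(patches)):
        # High-entropy but self-repeating regions are suppressed.
        sal[k] = patch_entropy(patches[k]) * (1.0 - redundancy(patches, k))
    return sal.reshape(gh, gw)
```

On a texture made of one repeated tile, every copy has high entropy but also high mutual similarity, so its saliency is discounted; a single patch that breaks the repetition keeps its entropy and stands out, which is the intended redundancy-reduction behavior.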
► The bottom-up visual saliency is estimated by spatial redundancy reduction.
► A non-local scheme is proposed to measure the spatial redundancy.
► The model is adaptive to both natural and conceptual images.
Journal: Journal of Visual Communication and Image Representation - Volume 23, Issue 7, October 2012, Pages 1158–1166