Article ID | Journal ID | Publication Year | English Paper | Full Text
---|---|---|---|---
536969 | 870651 | 2013 | 18-page PDF | Free download
The ability to predict, given an image or a video, where a human might fixate elements of a viewed scene has long been of interest in the vision community. However, one point that is not addressed by the great majority of computational models is the variability exhibited by different observers when viewing the same scene, or even by the same subject across different trials. Here we present a model of gaze shift behavior which is driven by a composite foraging strategy operating over a time-varying visual landscape and which accounts for such variability. The system performs a deterministic walk if, in a neighborhood of the current gaze position, there exists a point of sufficiently high saliency; otherwise the search is driven by a Langevin equation whose random term is generated by an α-stable distribution. Results of simulations on complex videos from the publicly available University of Southern California CRCNS eye-1 dataset are compared with eye-tracking data and show that the model yields gaze shift motor behaviors whose statistics are similar to those exhibited by human observers.
► A stochastic model of eye guidance on complex videos is proposed.
► The eye's behavior is that of a random walker following a composite foraging strategy.
► An extensive stage provides global relocations of gaze in the form of Lévy flights.
► An intensive stage allows for local visual information gathering.
► The model mimics the variability of human gaze shift patterns and statistics.
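The two-stage strategy summarized above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): if a sufficiently salient point lies within a local window around the current gaze position, the walk is deterministic (intensive stage); otherwise a global relocation is drawn from a symmetric α-stable distribution, sampled here via the standard Chambers–Mallows–Stuck method. All parameter names and default values are illustrative assumptions.

```python
import numpy as np

def alpha_stable(alpha, size, rng):
    # Symmetric alpha-stable sample (beta = 0), Chambers-Mallows-Stuck method.
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))

def gaze_step(pos, saliency, radius=15, threshold=0.7,
              alpha=1.6, scale=20.0, rng=None):
    """One gaze shift on a 2-D saliency map (illustrative sketch).

    pos       -- current gaze position (row, col)
    saliency  -- 2-D array of saliency values in [0, 1]
    radius    -- half-width of the local inspection window
    threshold -- saliency level triggering the deterministic (intensive) walk
    alpha     -- stability index of the Levy-flight step (extensive stage)
    """
    rng = rng or np.random.default_rng()
    h, w = saliency.shape
    y, x = pos
    # Local window around the current fixation.
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    window = saliency[y0:y1, x0:x1]
    iy, ix = np.unravel_index(np.argmax(window), window.shape)
    if window[iy, ix] >= threshold:
        # Intensive stage: deterministic walk to the local salient point.
        return (y0 + iy, x0 + ix)
    # Extensive stage: global relocation driven by alpha-stable noise.
    dy, dx = scale * alpha_stable(alpha, 2, rng)
    return (int(np.clip(y + dy, 0, h - 1)),
            int(np.clip(x + dx, 0, w - 1)))
```

Iterating `gaze_step` over the frames of a saliency-mapped video would produce a scanpath whose step lengths mix short local moves with occasional long Lévy-flight relocations, the heavy-tailed pattern the model attributes to human gaze shifts.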
Journal: Signal Processing: Image Communication - Volume 28, Issue 8, September 2013, Pages 949–966