Article ID: 4969521
Journal: Pattern Recognition
Published Year: 2017
Pages: 11
File Type: PDF
Abstract
Objective video quality assessment (VQA) plays an important role in controlling video quality. Most existing VQA methods measure motion-related temporal distortion with optical-flow techniques, which are not consistently reliable for modeling general visual dynamics. This paper presents a full-reference temporal distortion measure based on spacetime texture, a unified and distributed descriptor of a broad set of spacetime structures. Our method measures the distortion of spacetime texture in video with a motion-tuning strategy, which effectively captures temporal distortion along motion trajectories. It then estimates self-information-based visual saliency for spatial pooling by reusing the motion descriptors. We evaluated the method on two public VQA databases covering a wide variety of distortion types, in which the videos were viewed on a large screen or on mobile devices. The results show that our method correlates highly with subjective quality and is computationally efficient.
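As a rough illustration of the pooling step described above, the sketch below weights a per-frame temporal-distortion map by a self-information saliency map before averaging. This is not the authors' implementation: the abstract does not specify how saliency is derived from the motion descriptors, so the histogram-based self-information estimate, the helper names self_information_saliency and pooled_frame_distortion, and the synthetic inputs are all assumptions for illustration only.

```python
# Minimal sketch (assumed, not the paper's code): saliency-weighted spatial
# pooling of a per-frame temporal-distortion map, where saliency is a generic
# self-information estimate -log p(f) over a descriptor-response map.
import numpy as np

def self_information_saliency(feature_map, bins=64):
    """Approximate per-pixel self-information -log p(f) from a response map."""
    hist, edges = np.histogram(feature_map, bins=bins)
    prob = hist / hist.sum()                               # empirical bin probabilities
    idx = np.clip(np.digitize(feature_map, edges[:-1]) - 1, 0, bins - 1)
    return -np.log(prob[idx] + 1e-12)                      # rarer responses -> more salient

def pooled_frame_distortion(distortion_map, feature_map):
    """Pool a per-pixel distortion map with self-information saliency weights."""
    w = self_information_saliency(feature_map)
    return float((w * distortion_map).sum() / (w.sum() + 1e-12))

# Usage on synthetic data: one 64x64 frame of hypothetical distortion values
# and hypothetical motion-descriptor responses.
rng = np.random.default_rng(0)
distortion = rng.random((64, 64))
responses = rng.random((64, 64))
print(pooled_frame_distortion(distortion, responses))
```

Reusing the descriptor responses for saliency, as the abstract notes, avoids a separate saliency model and keeps the pooling stage computationally cheap.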
Related Topics
Physical Sciences and Engineering > Computer Science > Computer Vision and Pattern Recognition