Article ID: 563134 · Journal: Signal Processing · Published: 2013 · 15 Pages · PDF
Abstract

• Spatial–temporal features in a video can be well detected by the structure tensor.
• Different types of regions of input videos are detected and fused independently.
• The fusion method performs well in spatial–temporal feature extraction and consistency.
• The fusion method can also fuse videos with dynamic background images.

Using the three-dimensional uniform discrete curvelet transform (3D-UDCT) and a spatial–temporal structure tensor, a novel fusion algorithm for videos with static background images is proposed in this paper. First, the 3D-UDCT is employed to decompose the source videos into subbands of different scales and directions. Second, the corresponding subbands of the source videos are merged with different fusion schemes. Finally, the fused video is obtained by the inverse 3D-UDCT. In particular, when the bandpass directional subband coefficients are merged, a spatial–temporal salience detection algorithm based on the structure tensor is applied, and each subband is divided into three types of regions: regions with temporal moving targets, regions with spatial features of the background images, and smooth regions. Different fusion rules are then designed for each type of region. Compared with several existing fusion methods, the proposed algorithm not only extracts more spatial–temporal salient features from the input videos but also performs better in spatial–temporal consistency. In addition, the proposed algorithm can be extended, by a simple modification, to fuse videos with dynamic background images. Several sets of experimental results demonstrate the feasibility and validity of the proposed fusion method.
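The structure-tensor step above can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: it computes the temporal and spatial components of the 3-D structure tensor from video gradients and labels each voxel as a smooth region, a spatial background feature, or a temporal moving target. The window size, threshold `tau`, and the precedence given to temporal salience are all illustrative assumptions.

```python
import numpy as np

def box_smooth(a):
    """Average over a 3x3x3 neighbourhood (circular edges, for brevity)."""
    out = np.zeros_like(a)
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(a, (dt, dy, dx), axis=(0, 1, 2))
    return out / 27.0

def classify_regions(video, tau=1e-3):
    """Label voxels of a (t, y, x) video: 0 = smooth, 1 = spatial feature,
    2 = temporal moving target. Threshold tau is an illustrative assumption."""
    v = np.asarray(video, dtype=float)
    It, Iy, Ix = np.gradient(v)              # derivatives along t, y, x
    # locally averaged structure-tensor energies:
    Jtt = box_smooth(It * It)                # temporal gradient energy
    Jss = box_smooth(Iy * Iy + Ix * Ix)      # spatial gradient energy
    labels = np.zeros(v.shape, dtype=int)
    labels[Jss > tau] = 1                    # spatially salient background
    labels[Jtt > tau] = 2                    # motion takes precedence
    return labels
```

In a full fusion pipeline, these labels would be computed per bandpass subband and drive region-specific fusion rules; here they are shown on the raw video only to make the classification concrete.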
