Article ID: 407356
Journal: Neurocomputing
Published Year: 2013
Pages: 11 Pages
File Type: PDF
Abstract

Content-based video copy detection has attracted increasing attention in the video search community due to the rapid proliferation of video copies over the Internet. Most existing video copy detection techniques focus on spatial transformations such as brightness enhancement and caption superimposition, which can be handled efficiently by clip-level matching, a technique that summarizes the full content of a video clip as a single signature. However, temporal transformations involving random insertion and deletion operations pose a great challenge to clip-level matching. Although some studies employ frame-level matching to deal with temporal transformations, its high computational complexity can make it impractical in real applications. In this paper, we present a novel search method to address these problems. A given query video clip is partitioned into short segments, each of which is then scanned linearly over the video clips in a dataset. Rather than performing an exhaustive search, we derive similarity upper bounds for these query segments and use them as a filter to skip unnecessary matching. In addition, we present a min-hash-based inverted indexing mechanism to find candidate clips in the dataset. Experimental results demonstrate that the proposed method is robust and efficient in dealing with temporal-based video copies.
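The min-hash-based inverted indexing idea mentioned above can be illustrated with a minimal sketch. This is not the authors' exact scheme; the signature length, the feature representation (hashable per-frame features), and all function names below are illustrative assumptions.

```python
import random

def minhash_signature(feature_set, num_hashes=16, seed=0):
    """Compute a min-hash signature for a set of hashable frame features.

    Each of the num_hashes slots simulates an independent random permutation
    by XOR-ing feature hashes with a seeded random 64-bit mask and taking
    the minimum. Clips with similar feature sets tend to agree in many slots.
    """
    rng = random.Random(seed)
    masks = [rng.getrandbits(64) for _ in range(num_hashes)]
    return tuple(min(hash(f) ^ m for f in feature_set) for m in masks)

def build_inverted_index(clips):
    """Map each (slot, min-hash value) pair to the set of clips producing it.

    clips: dict mapping clip id -> set of frame features.
    """
    index = {}
    for clip_id, features in clips.items():
        for slot, value in enumerate(minhash_signature(features)):
            index.setdefault((slot, value), set()).add(clip_id)
    return index

def candidate_clips(index, query_features):
    """Return dataset clips sharing at least one min-hash value with the query.

    Only these candidates would then undergo the finer segment-level matching;
    all other clips are skipped without comparison.
    """
    candidates = set()
    for slot, value in enumerate(minhash_signature(query_features)):
        candidates |= index.get((slot, value), set())
    return candidates
```

For example, indexing a small dataset and probing it with a query that shares features with one clip returns that clip as a candidate while unrelated clips are typically filtered out, which is the role the index plays before the segment-level similarity bounds are applied.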

Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence