Article code: 6856797
Journal code: 1437970
Publication year: 2018
Full-text version: 13-page PDF (free download)
English Title of the ISI Article
A salient dictionary learning framework for activity video summarization via key-frame extraction
Persian Translation of the Title
یک چارچوب یادگیری دیکشنری برجسته برای خلاصه‌سازی ویدئوی فعالیت از طریق استخراج فریم کلیدی
Keywords
Video summarization, key-frame extraction, Column Subset Selection Problem, video saliency, genetic algorithm
Related Topics
Engineering and Basic Sciences > Computer Engineering > Artificial Intelligence
English Abstract
Recently, dictionary learning methods for unsupervised video summarization have surpassed traditional video frame clustering approaches. This paper addresses static summarization of videos depicting activities, which possess certain recurrent properties. In this context, a flexible definition of an activity video summary is proposed, as the set of key-frames that can both reconstruct the original, full-length video and simultaneously represent its most salient parts. Both objectives can be jointly optimized across several information modalities. The two criteria are merged into a “salient dictionary” learning task that is proposed as a strict definition of the video summarization problem, encapsulating many existing algorithms. Three specific, novel video summarization methods are derived from this definition: the Numerical, the Greedy and the Genetic Algorithm. In all formulations, the reconstruction term is modeled algebraically as a Column Subset Selection Problem (CSSP), while the saliency term is modeled as an outlier detection problem, a low-rank approximation problem, or a summary dispersion maximization problem. In quantitative evaluation, the Greedy Algorithm seems to provide the best balance between speed and overall performance, with the faster Numerical Algorithm a close second. All the proposed methods outperform a baseline clustering approach and two competing state-of-the-art static video summarization algorithms.
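The abstract describes the salient-dictionary formulation only at a high level. As a rough illustration of the general idea (and not the paper's actual implementation), the sketch below greedily builds a key-frame set by trading a CSSP-style least-squares reconstruction error against a simple summary-dispersion term standing in for saliency. The function names (greedy_salient_summary, reconstruction_error, dispersion), the weighting factor alpha, and the toy feature matrix are all illustrative assumptions, not taken from the source.

```python
# Minimal sketch, assuming per-frame feature vectors stacked as columns of X (d x n).
# Greedy selection of K key-frames balancing reconstruction error and dispersion.
import numpy as np

def reconstruction_error(X, idx):
    """Frobenius-norm error of reconstructing all frames X from the
    selected columns X[:, idx] via least squares (CSSP view)."""
    C = X[:, idx]                                   # selected key-frame columns
    coeffs, *_ = np.linalg.lstsq(C, X, rcond=None)  # best linear reconstruction
    return np.linalg.norm(X - C @ coeffs)

def dispersion(X, idx):
    """Sum of pairwise distances among selected frames: one possible
    saliency surrogate favouring a spread-out summary."""
    S = X[:, idx].T
    return sum(np.linalg.norm(a - b) for i, a in enumerate(S) for b in S[i + 1:])

def greedy_salient_summary(X, K, alpha=0.5):
    """Greedily add the frame that most reduces
    alpha * reconstruction_error - (1 - alpha) * dispersion."""
    selected, candidates = [], list(range(X.shape[1]))
    for _ in range(K):
        best = min(
            candidates,
            key=lambda j: alpha * reconstruction_error(X, selected + [j])
                          - (1 - alpha) * dispersion(X, selected + [j]),
        )
        selected.append(best)
        candidates.remove(best)
    return sorted(selected)

# Toy usage: 128-D features for 200 frames, pick a 5-frame summary.
rng = np.random.default_rng(0)
features = rng.normal(size=(128, 200))
print(greedy_salient_summary(features, K=5))
```

The paper's Numerical and Genetic Algorithm variants, and its outlier-detection or low-rank saliency terms, would replace the greedy loop and the dispersion surrogate above; this sketch only shows how the two objectives can be combined into a single selection criterion.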
Publisher
Database: Elsevier - ScienceDirect
Journal: Information Sciences - Volume 432, March 2018, Pages 319-331
Authors