Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
6863602 | 1439516 | 2018 | 18-page PDF | Free download |
English title of the ISI article
Detecting action tubes via spatial action estimation and temporal path inference
Keywords
Deep learning, action detection, spatial localization, region proposal network, tracking-by-detection
Related subjects
Engineering and Basic Sciences
Computer Engineering
Artificial Intelligence
English abstract
In this paper, we address the problem of action detection in unconstrained video clips. Our approach starts from action detection on object proposals at each frame, then aggregates the frame-level detection results belonging to the same actor across the whole video via linking, association, and tracking to generate action tubes that are spatially compact and temporally continuous. To achieve this, a novel action detection model with a two-stream architecture is first proposed, which utilizes fused features from both appearance and motion cues and can be trained end-to-end. Then, the association of the action paths is formulated as a maximum set coverage problem with the action detection results as a prior. We utilize an incremental search algorithm to obtain all the action proposals in a one-pass operation with great efficiency, especially when dealing with videos of long duration or with multiple action instances. Finally, a tracking-by-detection scheme is designed to further refine the generated action paths. Extensive experiments on three validation datasets, UCF-Sports, UCF-101, and J-HMDB, show that the proposed approach advances state-of-the-art action detection performance in terms of both accuracy and proposal quality.
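The frame-level linking step described in the abstract can be illustrated with a small sketch. The paper formulates association as maximum set coverage solved with an incremental search; the snippet below instead shows the simpler, widely used Viterbi-style linking that such pipelines build on, scoring a path by summed detection confidence plus IoU continuity between consecutive frames. All names and the exact energy are illustrative assumptions, not the authors' implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_action_path(frames):
    """frames: list over time, each entry a list of (box, score) detections.
    Returns one index per frame, picking the path that maximizes the sum of
    detection scores plus IoU overlap between consecutive boxes (dynamic
    programming over per-frame detections, then backtracking)."""
    # best[i]: best accumulated energy for a path ending at detection i
    best = [score for _, score in frames[0]]
    back = []  # back[t][i]: predecessor index in frame t for detection i of t+1
    for t in range(1, len(frames)):
        prev = frames[t - 1]
        cur, ptr = [], []
        for box, score in frames[t]:
            cands = [best[j] + score + iou(prev[j][0], box)
                     for j in range(len(prev))]
            j = max(range(len(cands)), key=cands.__getitem__)
            cur.append(cands[j])
            ptr.append(j)
        best = cur
        back.append(ptr)
    # Backtrack from the highest-energy terminal detection
    i = max(range(len(best)), key=best.__getitem__)
    path = [i]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(i)
    return list(reversed(path))
```

Running this over all frames once and removing the chosen detections before re-running yields multiple paths one at a time, which is the general flavor of extracting several action instances from one video.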
Publisher
Database: Elsevier - ScienceDirect
Journal: Neurocomputing - Volume 311, 15 October 2018, Pages 65-77
Authors
Nannan Li, Jingjia Huang, Thomas Li, Huiwen Guo, Ge Li