Article Code | Journal Code | Publication Year | English Article | Full-Text Version
---|---|---|---|---
6941121 | 870156 | 2015 | 6-page PDF | Free download
English Title of the ISI Article
A robust framework for tracking simultaneously rigid and non-rigid face using synthesized data
Keywords
Related Topics
Engineering and Basic Sciences
Computer Engineering
Computer Vision and Pattern Recognition
English Abstract
This paper presents a robust framework for simultaneously tracking the rigid pose and non-rigid animation of a single face with a monocular camera. Our proposed method consists of two phases: training and tracking. In the training phase, using automatically detected landmarks and the three-dimensional face model Candide-3, we built a cohort of synthetic face examples spanning a large range of the three axial rotations. The face's appearance is represented as a set of local patches around the landmarks, each characterized by Scale Invariant Feature Transform (SIFT) descriptors. In the tracking phase, we propose an original approach combining geometric and appearance models. The geometric model provides SIFT baseline matching between the current frame and an adaptive set of keyframes for rigid parameter estimation. The appearance model uses the nearest synthetic examples of the training set to re-estimate the rigid and non-rigid parameters. The method tracks up to 90° of vertical axial rotation and remains robust even in the presence of fast movements, illumination changes and tracking losses. Numerical results on the rigid and non-rigid parameter sets are reported on several annotated public databases. Compared to other published algorithms, our method provides an excellent compromise between rigid and non-rigid parameter accuracies. The approach yields good pose estimation (average error below 4° on the Boston University Face Tracking dataset) and landmark tracking precision (a 6.3-pixel error, compared to 6.8 for one of the state-of-the-art methods, on the Talking Face video).
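The abstract describes representing each landmark by a SIFT descriptor computed on a local patch, and matching descriptors between the current frame and keyframes for rigid parameter estimation. The sketch below is a minimal illustration of that kind of representation and matching, assuming OpenCV is available; the landmark input, patch size and ratio-test threshold are hypothetical and are not taken from the paper, so this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): SIFT descriptors on local patches
# around facial landmarks, plus matching against a keyframe's descriptors.
import cv2
import numpy as np

def landmark_sift_descriptors(gray_image, landmarks, patch_size=31.0):
    """Describe each landmark by a SIFT descriptor computed at its location.

    gray_image : single-channel uint8 image (one video frame)
    landmarks  : (N, 2) array of (x, y) landmark positions
    patch_size : diameter of the local patch (assumed value)
    """
    sift = cv2.SIFT_create()
    # Build one keypoint per landmark so SIFT samples a patch centred on it
    # instead of running its own interest-point detector.
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in landmarks]
    keypoints, descriptors = sift.compute(gray_image, keypoints)
    return descriptors  # shape (N, 128), one row per landmark

def match_to_keyframe(desc_frame, desc_keyframe, ratio=0.75):
    """Brute-force match frame descriptors to a keyframe with a ratio test."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_frame, desc_keyframe, k=2)
    return [m for m, n in pairs if m.distance < ratio * n.distance]
```

Pinning one keypoint per landmark makes SIFT describe exactly the patches the tracker cares about; the surviving matches could then feed a pose estimator, which is the role the abstract assigns to the geometric model.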
Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition Letters - Volume 65, 1 November 2015, Pages 75-80
Authors
Ngoc-Trung Tran, Fakhreddine Ababsa, Maurice Charbit