Article ID | Journal ID | Year published | Original article | Full text |
---|---|---|---|---|
526711 | 869205 | 2016 | 13 pages, PDF | Free download |
• Proposed a method combining three complementary detection tasks to detect AU events rather than frames.
• Performed best on four public datasets that differ in complexity.
• Proposed a novel event-based evaluation metric.
• Experimental results are reported in both conventional frame-based and new event-based metrics.
• Proposed an efficient search approach better suited to long image sequences.
Automatic facial action unit (AU) detection from video is a long-standing problem in facial expression analysis. Existing work typically poses AU detection as a classification problem between frames or segments of positive and negative examples, and emphasizes the use of different features or classifiers. In this paper, we propose a novel AU event detection method, Cascade of Tasks (CoT), which combines different tasks (i.e., frame-level detection, segment-level detection and transition detection). We train CoT sequentially, embracing diversity to ensure robustness and generalization to unseen data. Unlike conventional frame-based metrics that evaluate frames independently, we propose a new event-based metric that evaluates detection performance at the event level: it measures the ratio of correctly detected AU events instead of correctly classified frames. We show that CoT consistently outperforms state-of-the-art approaches in both frame-based and event-based metrics, across four datasets that differ in complexity: CK+, FERA, RU-FACS and GFT.
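To illustrate the difference between frame-based and event-based evaluation, the sketch below extracts AU events as contiguous runs of positive frames and scores detection at the event level. This is a minimal illustration, not the paper's exact metric: the function names and the `min_overlap` threshold are assumptions chosen for the example.

```python
import numpy as np

def frames_to_events(labels):
    """Convert a binary per-frame label sequence into a list of
    (start, end) events (end exclusive) over contiguous positive runs."""
    labels = np.asarray(labels, dtype=bool)
    # Pad with False so events touching the sequence ends are closed.
    padded = np.concatenate(([False], labels, [False]))
    diff = np.diff(padded.astype(int))
    starts = np.where(diff == 1)[0]   # 0 -> 1 transitions
    ends = np.where(diff == -1)[0]    # 1 -> 0 transitions
    return [(int(s), int(e)) for s, e in zip(starts, ends)]

def event_detection_ratio(gt_frames, pred_frames, min_overlap=0.5):
    """Fraction of ground-truth AU events that are detected.

    An event counts as detected when the predicted positive frames
    cover at least `min_overlap` of the event's duration (an
    illustrative criterion, not the paper's definition).
    """
    pred = np.asarray(pred_frames, dtype=bool)
    gt_events = frames_to_events(gt_frames)
    if not gt_events:
        return 1.0  # nothing to detect
    hits = sum(1 for s, e in gt_events
               if pred[s:e].sum() / (e - s) >= min_overlap)
    return hits / len(gt_events)
```

For example, a predictor that covers two of the three frames of one event but misses a second event entirely scores 0.5 at the event level, even though its frame-level accuracy may look much higher on a long, mostly-neutral sequence.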
Graphical Abstract: [figure]
Journal: Image and Vision Computing - Volume 51, July 2016, Pages 36–48