Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
533077 | 870056 | 2017 | 12-page PDF | Free download |
• We propose to decompose a human behavior sequence into relevant motion segments.
• Each motion segment is described by both human motion and depth appearance around hands.
• We model the sequence dynamics using a Dynamic Naive Bayesian classifier.
• We evaluate the method for various types of behavior: gesture, action and activity.
• The challenge of online detection of successive behaviors is also investigated.
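The first highlight, decomposing a behavior sequence into elementary motion segments, can be illustrated with a minimal sketch. The paper derives its cuts from shape analysis of the human pose; as a simplified stand-in, the sketch below places segment boundaries at local minima of a smoothed frame-to-frame motion-energy curve (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def segment_by_motion_energy(poses, smooth=2):
    """Split a pose sequence into elementary motion segments.

    poses : (T, J, 3) array of 3-D joint positions per frame.
    Cuts are placed at local minima of the frame-to-frame motion
    energy -- a simplified stand-in for the shape-analysis
    criterion used in the paper.
    Returns a list of (start, end) frame-index pairs covering 0..T.
    """
    # Motion energy: summed joint displacement between consecutive frames.
    vel = np.linalg.norm(np.diff(poses, axis=0), axis=2).sum(axis=1)
    # Light box-filter smoothing to suppress frame-level jitter.
    kernel = np.ones(2 * smooth + 1) / (2 * smooth + 1)
    vel = np.convolve(vel, kernel, mode="same")
    # Local minima of the energy curve mark segment boundaries.
    cuts = [0]
    for t in range(1, len(vel) - 1):
        if vel[t] < vel[t - 1] and vel[t] <= vel[t + 1]:
            cuts.append(t + 1)
    cuts.append(len(poses))
    return [(a, b) for a, b in zip(cuts, cuts[1:]) if b > a]
```

A sequence with two motion bursts separated by a pause would thus be cut near the middle of the pause, yielding two elementary segments.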
In this paper, we propose a framework for analyzing and understanding human behavior from depth videos. The proposed solution first employs shape analysis of the human pose across time to decompose the full motion into short temporal segments representing elementary motions. Each segment is then characterized by the human motion and by the depth appearance around the hand joints, describing both the change in body pose and the interaction with objects. Finally, the sequence of temporal segments is modeled with a Dynamic Naive Bayes classifier, which captures the dynamics of the elementary motions characterizing human behavior. Experiments on four challenging datasets evaluate the potential of the proposed approach in different contexts, including gesture and activity recognition as well as online activity detection. Competitive results in comparison with state-of-the-art methods are reported.
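The final modeling step scores a sequence of segment descriptors with a Dynamic Naive Bayes classifier, i.e. a hidden-state Markov model whose emission probability factorizes over independent feature streams. A minimal sketch of the forward (likelihood) computation over discretized features is given below; the function name, discrete observation encoding, and parameter shapes are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def dnb_log_likelihood(obs, pi, A, B_list):
    """Forward algorithm for a Dynamic Naive Bayes model.

    obs    : (T, F) int array, one discretized symbol per feature stream
    pi     : (S,) initial hidden-state probabilities
    A      : (S, S) state transition matrix
    B_list : list of F emission matrices, each of shape (S, K_f)
    Returns the log-likelihood of the observation sequence.
    """
    T, F = obs.shape
    S = len(pi)

    def emit(t):
        # Naive Bayes assumption: emission probability is the product
        # over the independent feature streams.
        p = np.ones(S)
        for f in range(F):
            p *= B_list[f][:, obs[t, f]]
        return p

    alpha = pi * emit(0)
    log_lik = 0.0
    for t in range(1, T):
        c = alpha.sum()           # rescale to avoid underflow
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * emit(t)
    return log_lik + np.log(alpha.sum())
```

Classification then amounts to training one such model per behavior class and assigning a test sequence to the class whose model yields the highest log-likelihood.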
Journal: Pattern Recognition - Volume 61, January 2017, Pages 222–233