Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
---|---|---|---|---|
525618 | 869001 | 2014 | 13-page PDF | Free download |
• Problem formulation: given a number of possible pointed targets, compute the target that the user points to.
• Estimate head pose by visually tracking the off-plane rotations of the face.
• Recognize two different hand pointing gestures (point left and point right).
• Model the problem using the Dempster–Shafer theory of evidence.
• Use Dempster’s rule of combination to fuse the two cues and derive the pointed target (see the sketch after the abstract).
In this paper we address an important issue in human–robot interaction: accurately deriving pointing information from the corresponding gesture. Since in most applications it is the pointed object rather than the exact pointing direction that matters, we formulate a novel approach that takes into account prior information about the locations of the possible pointed targets. To decide on the pointed object, the proposed approach uses the Dempster–Shafer theory of evidence to fuse information from two input streams: head pose, estimated by visually tracking the off-plane rotations of the face, and hand pointing orientation. Detailed experimental results validate the effectiveness of the method in realistic application setups.
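As a minimal sketch of the fusion step (not the paper's implementation), the Python snippet below applies Dempster's rule of combination over a hypothetical frame of three candidate targets T1–T3. The mass functions, target names, and numbers are illustrative assumptions: one mass function stands in for the head-pose cue and one for the recognized pointing gesture.

```python
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination.

    m1, m2: dicts mapping frozensets (focal elements, i.e. subsets of
    the frame of discernment) to basic belief masses summing to 1.
    """
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # product mass landing on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    # Normalize by 1 - K (K = total conflict) so the masses sum to 1 again.
    return {a: w / (1.0 - conflict) for a, w in fused.items()}

# Illustrative frame of discernment: three candidate targets (assumed).
theta = frozenset({"T1", "T2", "T3"})
# Head pose loosely supports the left pair of targets ...
m_head = {frozenset({"T1", "T2"}): 0.7, theta: 0.3}
# ... while a recognized "point right" gesture supports the right pair.
m_hand = {frozenset({"T2", "T3"}): 0.8, theta: 0.2}

fused = combine(m_head, m_hand)
best = max(fused, key=fused.get)  # focal element with the largest fused mass
print(sorted(best), round(fused[best], 2))  # ['T2'] 0.56
```

Note how the leftover mass on non-singleton sets keeps the ambiguity of each cue explicit; reading off the highest-mass focal element, as done here, is one reasonable decision rule for the target-selection step the abstract describes.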
Journal: Computer Vision and Image Understanding - Volume 120, March 2014, Pages 1–13