Article ID: 527018
Journal: Image and Vision Computing
Published Year: 2012
Pages: 10 Pages
File Type: PDF
Abstract

In this paper, we present a method for human full-body pose estimation from depth data that can be obtained using Time of Flight (ToF) cameras or the Kinect device. Our approach consists of robustly detecting anatomical landmarks in the 3D data and fitting a skeleton body model using constrained inverse kinematics. Instead of relying on appearance-based features for interest point detection that can vary strongly with illumination and pose changes, we build upon a graph-based representation of the depth data that allows us to measure geodesic distances between body parts. As these distances do not change with body movement, we are able to localize anatomical landmarks independent of pose. For differentiation of body parts that occlude each other, we employ motion information, obtained from the optical flow between subsequent intensity images. We provide a qualitative and quantitative evaluation of our pose tracking method on ToF and Kinect sequences containing movements of varying complexity.
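The geodesic-distance idea described above can be illustrated with a short, self-contained sketch. This is not the authors' implementation; it only shows the general technique of treating the depth map as a pixel graph, running Dijkstra from a seed on the body, and taking geodesic extrema as candidate landmarks. All names (depth_to_points, geodesic_distances), the camera intrinsics, the 5 cm edge threshold, and the centroid seed are illustrative assumptions.

import heapq
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (metres) to 3D points of shape (H, W, 3).
    The intrinsics are placeholder values, not taken from the paper."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z])

def geodesic_distances(depth, mask, edge_thresh=0.05):
    """Dijkstra over the pixel grid: 4-neighbours on the body mask are
    connected if their 3D points are closer than edge_thresh metres.
    The seed is the 2D centroid of the mask, assumed to lie on the body."""
    pts = depth_to_points(depth)
    h, w = depth.shape
    ys, xs = np.nonzero(mask)
    seed = (int(ys.mean()), int(xs.mean()))
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                step = float(np.linalg.norm(pts[ny, nx] - pts[y, x]))
                if step < edge_thresh and d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist

def candidate_landmarks(dist, mask, n=5):
    """Return the n pixels with the largest finite geodesic distance.
    A real system would cluster / suppress neighbouring maxima."""
    d = np.where(mask & np.isfinite(dist), dist, -1.0)
    idx = np.argsort(d, axis=None)[-n:]
    return np.column_stack(np.unravel_index(idx, d.shape))

Because geodesic distances along the body surface are largely invariant to articulation, the extrema returned by such a procedure tend to correspond to the head, hands, and feet regardless of pose, which is the property the landmark detection in the paper builds on.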

Highlights
► Full-body human pose estimation from depth images.
► Extraction of anatomical landmarks using geodesic distances.
► Disambiguation of self-occlusions using optical flow.
► Pose estimation for general movements without training data.
► Experimental evaluation on Time of Flight and Kinect data.
