Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
412044 | Neurocomputing | 2015 | 6 | |
This paper describes an easy-to-use system for estimating the shape of a human body and his or her clothes. The system uses a Kinect to capture the person's RGB and depth information from different views. Using the depth data, a non-rigid deformation method is devised to compensate for motion between the different views, thereby aligning and completing the dressed shape. Given the reconstructed dressed shape, skin regions are recognized in the RGB images by a skin classifier, and these regions are taken as tight constraints for the body estimation. Subsequently, the body shape is estimated from the skin regions of the dressed shape by leveraging a statistical model of the human body. After the body estimation, the body shape is non-rigidly deformed to fit the dressed shape, so as to extract the cloth layer of the dressed shape. We demonstrate our system and the algorithms therein through several experiments. The results show the effectiveness of the proposed method.
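The abstract does not give implementation details, so the sketch below is only a rough illustration of two of the described steps, not the paper's method. Part (a) labels skin pixels by fixed thresholds in YCbCr space (the BT.601 conversion and the threshold values are assumptions); part (b) fits a linear, PCA-style body shape space to skin-constrained points by regularized least squares, in the spirit of statistical body models. All function names and parameters are hypothetical.

```python
import numpy as np

# (a) Hypothetical baseline skin classifier: threshold chroma in YCbCr space.
def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 image. Returns a boolean HxW skin mask."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> YCbCr (ITU-R BT.601)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Typical skin-tone chroma ranges (assumed thresholds, not from the paper)
    return (cr > 135) & (cr < 180) & (cb > 85) & (cb < 135) & (y > 40)

# (b) Hypothetical shape-space fit: vertices(beta) = mean + basis @ beta.
def fit_body_shape(mean, basis, skin_idx, skin_pts, lam=1e-2):
    """
    mean:     (3N,) mean body shape, flattened xyz per vertex
    basis:    (3N, K) shape-basis matrix (e.g., PCA components)
    skin_idx: (M,) vertex indices constrained by observed skin points
    skin_pts: (M, 3) observed positions on the dressed scan
    Returns the K shape coefficients beta (regularized least squares).
    """
    rows = np.stack([3 * skin_idx, 3 * skin_idx + 1, 3 * skin_idx + 2],
                    axis=1).ravel()                # flattened xyz rows per vertex
    A = basis[rows]                                # (3M, K)
    b = skin_pts.ravel() - mean[rows]              # residual to the mean shape
    K = basis.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ b)
```

In this sketch, the skin mask would select which scan points act as tight constraints, and the fitted coefficients give a body shape that can then be non-rigidly deformed toward the full dressed scan; the actual classifier, body model, and deformation used in the paper are not specified in this abstract.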