Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
6940068 | Pattern Recognition | 2016 | 13 |
Abstract
Human-following robots are important for home, industrial, and battlefield applications. To interact effectively with humans, a robot needs to locate a person's position and understand his/her motion. Vision-based techniques are widely used. However, due to the close distance between human and robot, and the limitation of a camera's field of view, only part of a human body can be observed most of the time. As such, the human motion observed by a robot is inherently ambiguous. Simultaneously identifying the body part being observed and the motion the person is undergoing is a challenging problem, and one that has not been well studied in the past. In this paper, we propose a novel method that solves the body part and motion identification problems in a unified framework. The relative position of an observed part with respect to the whole body and the motion type are treated as continuous and discrete labels, respectively, and the most probable labeling is inferred by structured learning. A fast part-distribution estimation is introduced to reduce the computational cost. The proposed approach is able to identify different body parts without explicitly building a model for each individual part, and to recognize the motion with only partial body observations. The approach is evaluated on actual videos captured by a human-following robot as well as synthesized videos from the public UCF50 dataset, originally developed for action recognition. The results demonstrate the effectiveness of the approach.
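The abstract's core idea of jointly inferring a discrete motion label and a continuous part-position label can be illustrated with a minimal sketch. This is not the paper's actual structured-learning model: the `score` function, per-motion `weights` templates, and the grid search over positions below are all hypothetical stand-ins for the learned compatibility function and inference procedure.

```python
# Hypothetical sketch of joint discrete/continuous labeling (NOT the
# paper's implementation): score every (motion, position) pair and
# take the argmax, mimicking inference over a structured label space.
import numpy as np


def score(features, motion, position, weights):
    """Toy compatibility score between observed features and a
    (motion, position) labeling; weights are per-motion templates."""
    template = weights[motion]
    # Penalize deviation of the observation from the template scaled
    # by the hypothesized relative body position in [0, 1].
    return -np.sum((features - template * position) ** 2)


def infer(features, motions, weights,
          positions=np.linspace(0.0, 1.0, 101)):
    """Exhaustively search the discrete motions and a discretized grid
    over the continuous position; return the best (motion, position)."""
    return max(
        ((m, p) for m in motions for p in positions),
        key=lambda mp: score(features, mp[0], mp[1], weights),
    )


if __name__ == "__main__":
    # Two made-up motion templates and one partial-body observation.
    weights = {"walk": np.array([1.0, 0.5]), "run": np.array([2.0, 1.5])}
    obs = np.array([0.5, 0.25])  # consistent with "walk" at position 0.5
    motion, pos = infer(obs, ["walk", "run"], weights)
    print(motion, round(pos, 2))  # → walk 0.5
```

In the actual paper the compatibility function is learned with structured learning and the search is accelerated by the fast part-distribution estimation mentioned in the abstract; the brute-force grid here only conveys the shape of the joint inference problem.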
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Sihao Ding, Qiang Zhai, Ying Li, Junda Zhu, Yuan F. Zheng, Dong Xuan