Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6864182 | Neurocomputing | 2018 | 35 | |
Abstract
Human action recognition from RGBD videos has recently attracted much attention in the area of computer vision. Mainstream methods focus on designing highly discriminative features, which suffer from high dimensionality. In human experience, discriminative body parts, such as the hands or legs, play an important role in identifying human actions. Motivated by this observation, we propose a Random Forest (RF) Out-of-Bag (OB) estimation based approach to extract discriminative parts for each action. First, the features of each joint-based part are separately fed into an RF classifier, and the OB estimation of each part is used to evaluate the discrimination of the joints in that part. Second, joints with high discrimination over the whole dataset are selected to design the feature, so the feature dimension is reduced efficiently. Experiments conducted on the MSR Action 3D and MSR Daily Activity3D datasets show that the proposed approach outperforms state-of-the-art methods in accuracy with lower feature dimensions.
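The abstract describes ranking joint-based parts by the out-of-bag estimate of a per-part random forest and keeping only the most discriminative joints. Below is a minimal sketch of that selection step, assuming skeleton features are already grouped per part; it uses scikit-learn's RandomForestClassifier with its built-in OOB score, and names such as `part_features` and `select_discriminative_parts` are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_discriminative_parts(part_features, labels, top_k=5):
    """Rank joint-based parts by the OOB accuracy of a per-part random forest.

    part_features: dict mapping part name -> (n_samples, n_features) array
    labels:        (n_samples,) array of action labels
    top_k:         number of parts to keep for the reduced descriptor
    """
    oob_scores = {}
    for part, X in part_features.items():
        rf = RandomForestClassifier(
            n_estimators=200,
            bootstrap=True,
            oob_score=True,      # enable out-of-bag estimation
            random_state=0,
        )
        rf.fit(X, labels)
        # OOB accuracy serves as a proxy for how discriminative this part is
        oob_scores[part] = rf.oob_score_

    # Keep the parts with the highest OOB estimates; their joints form the
    # lower-dimensional feature used for final classification.
    ranked = sorted(oob_scores, key=oob_scores.get, reverse=True)
    return ranked[:top_k], oob_scores
```

Because each part is evaluated with its own classifier, the selection stage is independent of the final feature design and only the retained joints contribute to the reduced descriptor.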
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
Min Huang, Guo-Rong Cai, Hong-Bo Zhang, Sheng Yu, Dong-Ying Gong, Dong-Lin Cao, Shaozi Li, Song-Zhi Su