Article ID Journal Published Year Pages File Type
4947693 Neurocomputing 2017 13 Pages PDF
Abstract
Gait recognition has proved useful for human identification at a distance. However, variations such as view, clothing, and carrying condition still make gait recognition challenging in real applications, because they make it hard to extract invariant features that distinguish different subjects. For view variation, a view transformation model can convert the gait feature from one view to another, but most existing models must first estimate the view angle and work for only one view pair; they cannot efficiently convert multi-view data to one specific view. Other variations likewise require dedicated models. We employ a deep model based on auto-encoders to extract invariant gait features. The model synthesizes gait features progressively through stacked multi-layer auto-encoders. Its unique advantage is that a single model suffices, and the extracted feature is robust to view, clothing, and carrying-condition variations. The proposed method is evaluated on two large gait datasets, CASIA Gait Dataset B and the SZU RGB-D Gait Dataset. The experimental results show that the proposed method achieves state-of-the-art performance with only one uniform model.
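To illustrate the stacked auto-encoder idea described above, here is a minimal sketch in NumPy. It is not the authors' implementation: the layer dimensions, random (untrained) weights, and function names are all hypothetical, and only the forward encoding pass is shown. The point is the progressive structure: a gait feature passes through several stacked encoder layers, and the deepest layer's output serves as the invariant feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(in_dim, out_dim):
    # One auto-encoder's encoder weights; random values stand in for
    # weights that would be learned by layer-wise training.
    return rng.standard_normal((in_dim, out_dim)) * 0.1

def encode(x, weights):
    # Pass the gait feature through the stacked encoders; in the paper's
    # scheme each level progressively suppresses variation
    # (view, clothing, carrying condition).
    h = x
    for W in weights:
        h = np.tanh(h @ W)  # one encoder per stack level
    return h

# Hypothetical dimensions: a flattened gait feature of 1024 values
# compressed through three stacked layers down to 128 values.
dims = [1024, 512, 256, 128]
weights = [make_layer(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]

gait_feature = rng.standard_normal(1024)
invariant_feature = encode(gait_feature, weights)
print(invariant_feature.shape)  # (128,)
```

Because a single stack handles all inputs regardless of view, there is no need to estimate the view angle or train one model per view pair, which is the efficiency advantage the abstract claims over pairwise view transformation models.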
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence
Authors