| Article Code | Journal Code | Publication Year | English Article | Full-Text Version |
|---|---|---|---|---|
| 6939015 | 1449968 | 2018 | 18-page PDF | Free download |
English Title of ISI Article
Maximal granularity structure and generalized multi-view discriminant analysis for person re-identification
Keywords
Person re-identification, Maximal granularity structure descriptor, Generalized multi-view discriminant analysis, Representation consistency
Related Topics
Engineering and Basic Sciences
Computer Engineering
Computer Vision and Pattern Recognition
English Abstract
This paper proposes a novel descriptor, the Maximal Granularity Structure Descriptor (MGSD), for feature representation, and an effective metric learning method, Generalized Multi-view Discriminant Analysis based on representation consistency (GMDA-RC), for person re-identification (Re-ID). MGSD captures rich local structural information from overlapping macro-pixels in an image, analyzes the horizontal occurrence of multi-granularity structures, and maximizes that occurrence to extract a representation robust to viewpoint changes. As a result, MGSD captures rich person appearance information while remaining robust to varying imaging conditions. In addition, exploiting multi-view information, we present GMDA-RC, inspired by the observation that different views share similar data structures. GMDA-RC seeks multiple discriminant common spaces for multiple views by jointly learning multiple view-specific linear transforms. Finally, we evaluate the proposed method (MGSD+GMDA-RC) on three publicly available person Re-ID datasets: VIPeR, CUHK-01, and the Wide Area Re-ID dataset (WARD). On VIPeR and CUHK-01, our method significantly outperforms the state-of-the-art methods, achieving rank-1 matching rates of 67.09% and 70.61%, improvements of 17.41% and 5.34%, respectively. On WARD, we consider the pairwise camera views (cameras 1-2, 1-3, and 2-3), where our method achieves rank-1 matching rates of 64.33%, 59.42%, and 70.32%, improvements of 5.68%, 11.04%, and 9.06% over the state-of-the-art methods, respectively.
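The descriptor's key step, maximizing the horizontal occurrence of local structures, can be illustrated with a minimal sketch. The function below is hypothetical (the paper's exact macro-pixel descriptor and granularity levels are not specified here); it assumes some local descriptor has already been computed for each overlapping patch and shows only the strip-wise max-pooling that gives viewpoint robustness:

```python
import numpy as np

def horizontal_max_occurrence(patch_features, positions, num_strips, image_height):
    """Pool local patch descriptors by maximizing their occurrence along each
    horizontal strip, in the spirit of MGSD's horizontal-occurrence step.

    patch_features: (N, D) array, one local descriptor per overlapping patch
    positions:      (N,) array of patch-center row coordinates
    num_strips:     number of horizontal bands the image is divided into
    image_height:   image height in pixels
    """
    D = patch_features.shape[1]
    strip_height = image_height / num_strips
    pooled = np.zeros((num_strips, D))
    for s in range(num_strips):
        # patches whose centers fall inside strip s
        in_strip = (positions >= s * strip_height) & (positions < (s + 1) * strip_height)
        if np.any(in_strip):
            # element-wise max over all patches in the strip: keeping the
            # strongest local response is stable when the person shifts
            # horizontally between camera views
            pooled[s] = patch_features[in_strip].max(axis=0)
    return pooled.ravel()  # final strip-wise descriptor

# toy usage: 200 overlapping patches with 32-D local histograms from a
# 128-pixel-tall image, pooled into 6 horizontal strips
feats = np.random.rand(200, 32)
rows = np.random.randint(0, 128, size=200)
descriptor = horizontal_max_occurrence(feats, rows, num_strips=6, image_height=128)
print(descriptor.shape)  # (192,)
```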
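Likewise, jointly learning view-specific linear transforms into a common discriminant space can be sketched with a stacked, LDA-style formulation. This is a simplified multi-view discriminant analysis, not the paper's GMDA-RC: the representation-consistency term and the exact objective are omitted, and the function name and `reg` parameter are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def multiview_discriminant(X_views, y, dim, reg=1e-3):
    """Jointly learn one linear transform per camera view that maps all views
    into a shared discriminant space (simplified multi-view discriminant
    analysis; the paper's representation-consistency term is omitted).

    X_views: list of (n_v, d_v) feature matrices, one per view
    y:       list of (n_v,) label arrays, same identity space across views
    dim:     dimensionality of the common space
    """
    dims = [X.shape[1] for X in X_views]
    offsets = np.cumsum([0] + dims)
    D = offsets[-1]

    # zero-pad every sample into the stacked space so that a single
    # projection matrix W (D x dim) encodes all view-specific transforms
    Xs, ys = [], []
    for v, (X, yv) in enumerate(zip(X_views, y)):
        P = np.zeros((X.shape[0], D))
        P[:, offsets[v]:offsets[v + 1]] = X
        Xs.append(P)
        ys.append(yv)
    Xs = np.vstack(Xs)
    ys = np.concatenate(ys)

    # LDA scatter matrices in the stacked space: each class pools samples
    # from all views, which couples the per-view transforms
    mu = Xs.mean(axis=0)
    Sb = np.zeros((D, D))
    Sw = np.zeros((D, D))
    for c in np.unique(ys):
        Xc = Xs[ys == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    Sw += reg * np.eye(D)  # regularize for numerical stability

    # generalized eigenproblem Sb w = lambda Sw w; keep leading directions
    vals, vecs = eigh(Sb, Sw)
    W = vecs[:, ::-1][:, :dim]
    # split the stacked solution back into view-specific transforms
    return [W[offsets[v]:offsets[v + 1]] for v in range(len(X_views))]

# toy usage: two camera views, 5 identities x 10 samples each, 40/50-D features
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 10)
views = [rng.normal(size=(50, 40)), rng.normal(size=(50, 50))]
W1, W2 = multiview_discriminant(views, [labels, labels], dim=4)
print(W1.shape, W2.shape)  # (40, 4) (50, 4)
```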
Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition - Volume 79, July 2018, Pages 79-96
Authors
Zhao Cairong, Wang Xuekuan, Miao Duoqian, Wang Hanli, Zheng Weishi, Xu Yong, Zhang David
