Article code: 6939550
Journal code: 1449971
Publication year: 2018
English article: 35-page PDF
Full-text version: Free download
English title of the ISI article
What-and-where to match: Deep spatially multiplicative integration networks for person re-identification
Persian translation of the title
چه چیزی و کجا برای مطابقت: شبکه‌های عمیق یکپارچه‌سازی ضربی مکانی برای شناسایی مجدد فرد
Keywords
Multiplicative integration; Convolutional neural networks; Recurrent neural network; Person re-identification
Related subjects
Engineering and Basic Sciences > Computer Engineering > Computer Vision and Pattern Recognition
English abstract
Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is important to visual recognition and surveillance. Most existing methods exploit local regions with spatial manipulation to perform matching over local correspondences. However, they essentially extract fixed representations from pre-divided regions of each image and then perform matching based on these extracted representations. Models in this pipeline cannot capture the finer local patterns that are crucial for distinguishing positive pairs from negative ones, and they therefore underperform. In this paper, we propose a novel deep multiplicative integration gating function that answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) that extract convolutional activations and generate relevant descriptors for pedestrian matching. This leads to flexible representations for pair-wise images. To address where to match, we combat spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network, which imposes spatial dependency over all positions with respect to the entire image. The proposed network is end-to-end trainable and characterizes local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted on three benchmark data sets: VIPeR, CUHK03 and Market-1501.
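The abstract describes two components: a multiplicative integration gate over paired CNN activations ("what to match") and four-directional recurrent pooling over spatial positions ("where to match"). Below is a minimal PyTorch sketch of how such a pairing could be wired, not the authors' released code; the class and parameter names (MultiplicativeIntegration, FourDirectionalRNNPool, in_channels, hidden), the 1x1 projections, the tanh squashing, the GRU cells and all layer sizes are illustrative assumptions rather than details taken from the paper.

```python
# Hedged sketch, not the authors' implementation: element-wise multiplicative
# integration of two CNN activation maps for an image pair, followed by a
# four-directional recurrent sweep over spatial positions. All sizes and the
# choice of GRU cells are illustrative assumptions.
import torch
import torch.nn as nn


class MultiplicativeIntegration(nn.Module):
    """Joint representation of paired feature maps via an element-wise product."""

    def __init__(self, in_channels=512):
        super().__init__()
        # 1x1 projections before the multiplicative gate (assumed, not from the paper).
        self.proj_a = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.proj_b = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, feat_a, feat_b):
        # The product keeps activations that are strong in both views, so patterns
        # common to the pair are emphasised and view-specific responses are suppressed.
        return torch.tanh(self.proj_a(feat_a)) * torch.tanh(self.proj_b(feat_b))


class FourDirectionalRNNPool(nn.Module):
    """Spatially recurrent pooling: sweep the H x W grid along four directions."""

    def __init__(self, channels=512, hidden=128):
        super().__init__()
        self.rnns = nn.ModuleList([nn.GRU(channels, hidden, batch_first=True)
                                   for _ in range(4)])

    def forward(self, x):
        n, c, h, w = x.shape
        descriptors = []
        # Rows scanned left-to-right / right-to-left, columns top-down / bottom-up.
        sweeps = [x, x.flip([3]), x.transpose(2, 3), x.flip([2]).transpose(2, 3)]
        for rnn, grid in zip(self.rnns, sweeps):
            gn, gc, gh, gw = grid.shape
            seq = grid.permute(0, 2, 3, 1).reshape(gn * gh, gw, gc)
            out, _ = rnn(seq)                        # hidden state at every position
            out = out.reshape(gn, gh, gw, -1)
            descriptors.append(out.mean(dim=(1, 2)))  # one summary per direction
        return torch.cat(descriptors, dim=1)          # (n, 4 * hidden) descriptor


# Toy usage on random activations standing in for the two CNN streams.
if __name__ == "__main__":
    a, b = torch.randn(2, 512, 24, 8), torch.randn(2, 512, 24, 8)
    joint = MultiplicativeIntegration(512)(a, b)           # (2, 512, 24, 8)
    print(FourDirectionalRNNPool(512, 128)(joint).shape)   # torch.Size([2, 512])
```

In this sketch each GRU is shared across all rows (or columns) of its sweep direction, so every position's summary depends on context accumulated from one side of the image, and concatenating the four directional summaries gives each pair a descriptor that reflects dependencies from all sides.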
Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition - Volume 76, April 2018, Pages 727-738
Authors