Article code | Journal code | Publication year | English article | Full-text version
---|---|---|---|---
6938679 | 1449963 | 2018 | 38-page PDF | Free download
English title of the ISI article
Learning visual and textual representations for multimodal matching and classification
Persian translation of the title
یادگیری بازنمایی‌های بصری و متنی برای تطبیق و طبقه‌بندی چندوجهی
Keywords
Vision and language, multimodal matching, multimodal classification, deep learning
Related subjects
Engineering and Basic Sciences
Computer Engineering
Computer Vision and Pattern Recognition
English abstract
Multimodal learning has been an important and challenging problem for decades, which aims to bridge the modality gap between heterogeneous representations, such as vision and language. Unlike many current approaches which only focus on either multimodal matching or classification, we propose a unified network to jointly learn multimodal matching and classification (MMC-Net) between images and texts. The proposed MMC-Net model can seamlessly integrate the matching and classification components. It first learns visual and textual embedding features in the matching component, and then generates discriminative multimodal representations in the classification component. Combining the two components in a unified model can help improve their performance. Moreover, we present a multi-stage training algorithm by minimizing both the matching and classification loss functions. Experimental results on four well-known multimodal benchmarks demonstrate the effectiveness and efficiency of the proposed approach, which achieves competitive performance for multimodal matching and classification compared to state-of-the-art approaches.
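To make the described design concrete, below is a minimal PyTorch sketch of a network with a matching component (image and text projected into a shared embedding space, trained with an in-batch ranking loss) and a classification component (fused embeddings fed to a classifier, trained with cross-entropy), with both losses combined. The input feature dimensions, concatenation-based fusion, triplet margin, and loss weighting are illustrative assumptions, not the exact MMC-Net architecture or training schedule from the paper.

```python
# Sketch of a joint matching + classification model in the spirit of MMC-Net.
# Assumes pre-extracted image features (e.g. 2048-d) and text features (e.g.
# 300-d averaged word embeddings); all dimensions are placeholder choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchClassifyNet(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, embed_dim=512, num_classes=20):
        super().__init__()
        # Matching component: project each modality into a shared embedding space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Classification component: fuse the two embeddings and predict a label.
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim),
            nn.ReLU(inplace=True),
            nn.Linear(embed_dim, num_classes),
        )

    def forward(self, img_feat, txt_feat):
        img_emb = F.normalize(self.img_proj(img_feat), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feat), dim=-1)
        logits = self.classifier(torch.cat([img_emb, txt_emb], dim=-1))
        return img_emb, txt_emb, logits

def joint_loss(img_emb, txt_emb, logits, labels, margin=0.2, alpha=1.0):
    # Matching loss: in-batch hinge ranking loss over cosine similarities,
    # penalizing mismatched texts that score higher than the matched one.
    sim = img_emb @ txt_emb.t()               # (B, B) similarity matrix
    pos = sim.diag().unsqueeze(1)             # similarities of matched pairs
    cost = (margin + sim - pos).clamp(min=0)  # hinge against mismatched pairs
    cost.fill_diagonal_(0)
    match_loss = cost.mean()
    # Classification loss on the fused multimodal representation.
    cls_loss = F.cross_entropy(logits, labels)
    return match_loss + alpha * cls_loss

# Usage with random tensors, just to show how the shapes flow through.
model = MatchClassifyNet()
img = torch.randn(8, 2048)
txt = torch.randn(8, 300)
labels = torch.randint(0, 20, (8,))
img_emb, txt_emb, logits = model(img, txt)
loss = joint_loss(img_emb, txt_emb, logits, labels)
loss.backward()
```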
Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition - Volume 84, December 2018, Pages 51-67
Authors
Yu Liu, Li Liu, Yanming Guo, Michael S. Lew