Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
402177 | 676872 | 2016 | 9-page PDF | Free download |
• Learning the semantic representation of queries and answers using a neural network architecture.
• The neural network is trained in a pre-training phase followed by a fine-tuning phase (a sketch follows these highlights).
• The learned semantic-level features are incorporated into a learning to rank (LTR) framework.
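The highlights describe a two-phase training scheme but the abstract does not specify the actual architecture or objectives. The following is a minimal numpy sketch of the general idea, assuming a bag-of-words input, a single tanh hidden layer as the semantic representation, reconstruction-based pre-training, and dot-product similarity fine-tuning; all names (`W_enc`, `pretrain_step`, `finetune_step`) and sizes are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 16   # toy sizes; the paper's actual dimensions are not given in the abstract

# Shared projection mapping a bag-of-words vector to a dense semantic vector.
W_enc = rng.normal(scale=0.1, size=(VOCAB, HIDDEN))
W_dec = rng.normal(scale=0.1, size=(HIDDEN, VOCAB))

def encode(x):
    """Semantic representation of a query or answer vector x."""
    return np.tanh(x @ W_enc)

def pretrain_step(x, lr=0.05):
    """Pre-training: one gradient step on autoencoder-style reconstruction error."""
    global W_enc, W_dec
    h = np.tanh(x @ W_enc)
    x_hat = h @ W_dec
    err = 2.0 * (x_hat - x)                      # dL/dx_hat for L = ||x_hat - x||^2
    grad_dec = np.outer(h, err)
    grad_pre = (err @ W_dec.T) * (1.0 - h ** 2)  # backprop through tanh
    grad_enc = np.outer(x, grad_pre)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
    return float(((x_hat - x) ** 2).sum())

def finetune_step(q, a, y, lr=0.05):
    """Fine-tuning: push the dot-product similarity of encoded (query, answer)
    pairs toward y (1 for matched pairs, 0 for mismatched ones)."""
    global W_enc
    hq, ha = np.tanh(q @ W_enc), np.tanh(a @ W_enc)
    s = hq @ ha
    ds = 2.0 * (s - y)                           # dL/ds for L = (s - y)^2
    grad = np.outer(q, ds * ha * (1.0 - hq ** 2)) + np.outer(a, ds * hq * (1.0 - ha ** 2))
    W_enc -= lr * grad
    return float((s - y) ** 2)

# Toy run: pre-train on unlabeled texts, then fine-tune on (query, answer, label) triples.
texts = rng.integers(0, 2, size=(20, VOCAB)).astype(float)
for x in texts:
    pretrain_step(x)
q, a_pos, a_neg = texts[0], texts[0].copy(), texts[5]
for _ in range(20):
    finetune_step(q, a_pos, 1.0)
    finetune_step(q, a_neg, 0.0)
print(encode(q) @ encode(a_pos), encode(q) @ encode(a_neg))
```

Sharing `W_enc` across queries and answers is one way such a model can place lexically different but related words (e.g., “company” and “firm”) near each other in the semantic space once fine-tuning has seen matched pairs.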
In community question answering (cQA), users pose queries (or questions) on portals such as Yahoo! Answers, which can then be answered by other users who are often knowledgeable on the subject. cQA is increasingly popular on the Web, owing to its convenience and effectiveness in connecting users who have queries with those who have answers. In this article, we study the problem of finding previous queries (e.g., queries posed by other users) that may be similar to new queries, and adapting their answers as the answers to the new queries. A key challenge here is to bridge the lexical gap between new queries and old answers. For example, “company” in the queries may correspond to “firm” in the answers. To address this challenge, past research has proposed techniques similar to machine translation that “translate” old answers into ones expressed with the words of the new queries. However, a key limitation of these works is that they assume queries and answers are parallel texts, which is hardly true in reality. As a result, the translated or rephrased answers may not read naturally.

In this article, we propose a novel approach that learns the semantic representation of queries and answers using a neural network architecture. The learned semantic-level features are then incorporated into a learning to rank framework. We have evaluated our approach on a large-scale data set. The results show that it significantly outperforms existing approaches.
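The abstract says the learned semantic-level features are fed into a learning to rank framework but does not say which one. The sketch below shows one generic option under that assumption: a pairwise ranker trained on feature differences with scikit-learn's `LogisticRegression`, combining a lexical-overlap feature with a semantic cosine feature. The data, feature choices, and names are illustrative stand-ins, not the paper's actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def lexical_overlap(q_words, a_words):
    """Simple word-overlap (Jaccard) feature; a stand-in for lexical matching features."""
    q, a = set(q_words), set(a_words)
    return len(q & a) / max(len(q | a), 1)

def semantic_cosine(q_vec, a_vec):
    """Cosine similarity between learned semantic vectors (e.g., from the encoder sketch above)."""
    denom = np.linalg.norm(q_vec) * np.linalg.norm(a_vec) + 1e-9
    return float(q_vec @ a_vec) / denom

# Toy data: one new query and three candidate previous queries with relevance labels.
query_words = "where can i register a company".split()
candidates = [
    ("how do i register a firm", 1),         # relevant despite the company/firm gap
    ("best pizza places in town", 0),        # irrelevant
    ("how to register a company online", 1), # relevant
]
# Semantic vectors here are random stand-ins for the encoder output.
q_sem = rng.normal(size=16)
cand_sems = [q_sem + rng.normal(scale=0.3, size=16) if rel else rng.normal(size=16)
             for _, rel in candidates]

# Feature vector per (query, candidate) pair: [lexical feature, semantic feature].
feats = np.array([[lexical_overlap(query_words, text.split()),
                   semantic_cosine(q_sem, sem)]
                  for (text, _), sem in zip(candidates, cand_sems)])
labels = np.array([rel for _, rel in candidates])

# Pairwise learning to rank: classify feature *differences* between candidate pairs.
pair_X, pair_y = [], []
for i in range(len(labels)):
    for j in range(len(labels)):
        if labels[i] != labels[j]:
            pair_X.append(feats[i] - feats[j])
            pair_y.append(1 if labels[i] > labels[j] else 0)

ranker = LogisticRegression().fit(np.array(pair_X), np.array(pair_y))

# Score candidates with the learned linear weights and rank them.
scores = feats @ ranker.coef_.ravel()
for idx in np.argsort(-scores):
    print(f"{scores[idx]:+.3f}  {candidates[idx][0]}")
```

Because this ranker is linear over the two features, its learned weights show how much the semantic feature contributes beyond plain lexical overlap, which is the kind of comparison the abstract's evaluation claim alludes to.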
Journal: Knowledge-Based Systems - Volume 93, 1 February 2016, Pages 75–83